Test Report: KVM_Linux_crio 19780

d63f64bffc284d34b6c2581e44dece8bfcca0b7a:2024-10-09:36574

Failed tests (33/312)

Order  Failed test  Duration (s)
32 TestAddons/serial/GCPAuth/PullSecret 480.58
35 TestAddons/parallel/Ingress 153.32
37 TestAddons/parallel/MetricsServer 320.51
45 TestAddons/StoppedEnableDisable 154.25
164 TestMultiControlPlane/serial/StopSecondaryNode 141.46
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.64
166 TestMultiControlPlane/serial/RestartSecondaryNode 6.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.25
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 400.24
171 TestMultiControlPlane/serial/StopCluster 142
231 TestMultiNode/serial/RestartKeepsNodes 327.27
233 TestMultiNode/serial/StopMultiNode 145.17
240 TestPreload 272.41
248 TestKubernetesUpgrade 394.31
262 TestPause/serial/SecondStartNoReconfiguration 61.71
280 TestStartStop/group/old-k8s-version/serial/FirstStart 288.56
292 TestStartStop/group/no-preload/serial/Stop 138.93
297 TestStartStop/group/embed-certs/serial/Stop 139.07
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.1
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
302 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.88
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 721.85
312 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.19
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.04
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.06
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.25
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 413.7
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 479.46
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 347.02
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 128.68
TestAddons/serial/GCPAuth/PullSecret (480.58s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-421083 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-421083 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fc74ccb7-748c-4810-bb45-a1431c16ef61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-421083 -n addons-421083
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-09 18:57:52.159998012 +0000 UTC m=+683.149770929
addons_test.go:627: (dbg) Run:  kubectl --context addons-421083 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-421083 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-421083/192.168.39.156
Start Time:       Wed, 09 Oct 2024 18:49:51 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.22
IPs:
IP:  10.244.0.22
Containers:
busybox:
Container ID:  
Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sleep
3600
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mzz6f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-mzz6f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  8m1s                   default-scheduler  Successfully assigned default/busybox to addons-421083
Normal   Pulling    6m32s (x4 over 8m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
Warning  Failed     6m32s (x4 over 8m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
Warning  Failed     6m32s (x4 over 8m)     kubelet            Error: ErrImagePull
Warning  Failed     6m20s (x6 over 7m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    3m (x21 over 7m59s)    kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-421083 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-421083 logs busybox -n default: exit status 1 (70.84294ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-421083 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.58s)

TestAddons/parallel/Ingress (153.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-421083 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-421083 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-421083 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6f15c371-9273-4816-8120-e41e8534ec18] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6f15c371-9273-4816-8120-e41e8534ec18] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005002062s
I1009 18:58:59.757209   16607 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-421083 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.848194011s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-421083 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.156
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-421083 -n addons-421083
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 logs -n 25: (1.281307021s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| delete  | -p download-only-988518                                                                     | download-only-988518 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| delete  | -p download-only-944932                                                                     | download-only-944932 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| delete  | -p download-only-988518                                                                     | download-only-988518 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-505183 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | binary-mirror-505183                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43333                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-505183                                                                     | binary-mirror-505183 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | addons-421083                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | addons-421083                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-421083 --wait=true                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:49 UTC | 09 Oct 24 18:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | -p addons-421083                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-421083 ssh cat                                                                       | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | /opt/local-path-provisioner/pvc-e5d4b64b-252d-4269-93cd-d7941b14a023_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-421083 ip                                                                            | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-421083 ssh curl -s                                                                   | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-421083 ip                                                                            | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:47:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:47:36.919131   17401 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:47:36.919268   17401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:36.919278   17401 out.go:358] Setting ErrFile to fd 2...
	I1009 18:47:36.919285   17401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:36.919470   17401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 18:47:36.920079   17401 out.go:352] Setting JSON to false
	I1009 18:47:36.920885   17401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1798,"bootTime":1728497859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:47:36.920984   17401 start.go:139] virtualization: kvm guest
	I1009 18:47:36.922972   17401 out.go:177] * [addons-421083] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:47:36.924203   17401 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:47:36.924202   17401 notify.go:220] Checking for updates...
	I1009 18:47:36.925482   17401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:36.926648   17401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:47:36.927811   17401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:36.928991   17401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:47:36.930220   17401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:47:36.931405   17401 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:47:36.962269   17401 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 18:47:36.963305   17401 start.go:297] selected driver: kvm2
	I1009 18:47:36.963317   17401 start.go:901] validating driver "kvm2" against <nil>
	I1009 18:47:36.963327   17401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:47:36.964029   17401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:36.964104   17401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:47:36.978384   17401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 18:47:36.978421   17401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:47:36.978675   17401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:47:36.978709   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:47:36.978771   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:47:36.978779   17401 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:47:36.978836   17401 start.go:340] cluster config:
	{Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:36.978944   17401 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:36.980666   17401 out.go:177] * Starting "addons-421083" primary control-plane node in "addons-421083" cluster
	I1009 18:47:36.981893   17401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:36.981928   17401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:47:36.981938   17401 cache.go:56] Caching tarball of preloaded images
	I1009 18:47:36.982016   17401 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:47:36.982027   17401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:47:36.982319   17401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json ...
	I1009 18:47:36.982338   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json: {Name:mk8bd821ac2bab660fc018f0f8c608bab2497d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:36.982466   17401 start.go:360] acquireMachinesLock for addons-421083: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:47:36.982509   17401 start.go:364] duration metric: took 31.338µs to acquireMachinesLock for "addons-421083"
	I1009 18:47:36.982525   17401 start.go:93] Provisioning new machine with config: &{Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:36.982580   17401 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 18:47:36.984137   17401 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1009 18:47:36.984283   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:47:36.984321   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:47:36.997940   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 18:47:36.998290   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:47:36.998850   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:47:36.998899   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:47:36.999274   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:47:36.999445   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:47:36.999563   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:47:36.999716   17401 start.go:159] libmachine.API.Create for "addons-421083" (driver="kvm2")
	I1009 18:47:36.999745   17401 client.go:168] LocalClient.Create starting
	I1009 18:47:36.999785   17401 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 18:47:37.331686   17401 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 18:47:37.435435   17401 main.go:141] libmachine: Running pre-create checks...
	I1009 18:47:37.435458   17401 main.go:141] libmachine: (addons-421083) Calling .PreCreateCheck
	I1009 18:47:37.435983   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:47:37.436428   17401 main.go:141] libmachine: Creating machine...
	I1009 18:47:37.436443   17401 main.go:141] libmachine: (addons-421083) Calling .Create
	I1009 18:47:37.436583   17401 main.go:141] libmachine: (addons-421083) Creating KVM machine...
	I1009 18:47:37.437676   17401 main.go:141] libmachine: (addons-421083) DBG | found existing default KVM network
	I1009 18:47:37.438360   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.438220   17423 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I1009 18:47:37.438405   17401 main.go:141] libmachine: (addons-421083) DBG | created network xml: 
	I1009 18:47:37.438425   17401 main.go:141] libmachine: (addons-421083) DBG | <network>
	I1009 18:47:37.438435   17401 main.go:141] libmachine: (addons-421083) DBG |   <name>mk-addons-421083</name>
	I1009 18:47:37.438445   17401 main.go:141] libmachine: (addons-421083) DBG |   <dns enable='no'/>
	I1009 18:47:37.438452   17401 main.go:141] libmachine: (addons-421083) DBG |   
	I1009 18:47:37.438465   17401 main.go:141] libmachine: (addons-421083) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 18:47:37.438475   17401 main.go:141] libmachine: (addons-421083) DBG |     <dhcp>
	I1009 18:47:37.438482   17401 main.go:141] libmachine: (addons-421083) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 18:47:37.438494   17401 main.go:141] libmachine: (addons-421083) DBG |     </dhcp>
	I1009 18:47:37.438507   17401 main.go:141] libmachine: (addons-421083) DBG |   </ip>
	I1009 18:47:37.438517   17401 main.go:141] libmachine: (addons-421083) DBG |   
	I1009 18:47:37.438527   17401 main.go:141] libmachine: (addons-421083) DBG | </network>
	I1009 18:47:37.438535   17401 main.go:141] libmachine: (addons-421083) DBG | 
	I1009 18:47:37.443692   17401 main.go:141] libmachine: (addons-421083) DBG | trying to create private KVM network mk-addons-421083 192.168.39.0/24...
	I1009 18:47:37.506082   17401 main.go:141] libmachine: (addons-421083) DBG | private KVM network mk-addons-421083 192.168.39.0/24 created
	I1009 18:47:37.506113   17401 main.go:141] libmachine: (addons-421083) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 ...
	I1009 18:47:37.506128   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.506023   17423 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:37.506154   17401 main.go:141] libmachine: (addons-421083) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 18:47:37.506330   17401 main.go:141] libmachine: (addons-421083) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 18:47:37.766177   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.766047   17423 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa...
	I1009 18:47:38.007798   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:38.007670   17423 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/addons-421083.rawdisk...
	I1009 18:47:38.007832   17401 main.go:141] libmachine: (addons-421083) DBG | Writing magic tar header
	I1009 18:47:38.007847   17401 main.go:141] libmachine: (addons-421083) DBG | Writing SSH key tar header
	I1009 18:47:38.007859   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:38.007787   17423 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 ...
	I1009 18:47:38.007876   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083
	I1009 18:47:38.007949   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 (perms=drwx------)
	I1009 18:47:38.007978   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 18:47:38.007989   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:47:38.008002   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 18:47:38.008012   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 18:47:38.008023   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:47:38.008033   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:47:38.008047   17401 main.go:141] libmachine: (addons-421083) Creating domain...
	I1009 18:47:38.008061   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:38.008075   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 18:47:38.008084   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:47:38.008093   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins
	I1009 18:47:38.008101   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home
	I1009 18:47:38.008110   17401 main.go:141] libmachine: (addons-421083) DBG | Skipping /home - not owner
	I1009 18:47:38.009054   17401 main.go:141] libmachine: (addons-421083) define libvirt domain using xml: 
	I1009 18:47:38.009087   17401 main.go:141] libmachine: (addons-421083) <domain type='kvm'>
	I1009 18:47:38.009109   17401 main.go:141] libmachine: (addons-421083)   <name>addons-421083</name>
	I1009 18:47:38.009119   17401 main.go:141] libmachine: (addons-421083)   <memory unit='MiB'>4000</memory>
	I1009 18:47:38.009127   17401 main.go:141] libmachine: (addons-421083)   <vcpu>2</vcpu>
	I1009 18:47:38.009131   17401 main.go:141] libmachine: (addons-421083)   <features>
	I1009 18:47:38.009151   17401 main.go:141] libmachine: (addons-421083)     <acpi/>
	I1009 18:47:38.009165   17401 main.go:141] libmachine: (addons-421083)     <apic/>
	I1009 18:47:38.009172   17401 main.go:141] libmachine: (addons-421083)     <pae/>
	I1009 18:47:38.009177   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009182   17401 main.go:141] libmachine: (addons-421083)   </features>
	I1009 18:47:38.009189   17401 main.go:141] libmachine: (addons-421083)   <cpu mode='host-passthrough'>
	I1009 18:47:38.009194   17401 main.go:141] libmachine: (addons-421083)   
	I1009 18:47:38.009202   17401 main.go:141] libmachine: (addons-421083)   </cpu>
	I1009 18:47:38.009207   17401 main.go:141] libmachine: (addons-421083)   <os>
	I1009 18:47:38.009214   17401 main.go:141] libmachine: (addons-421083)     <type>hvm</type>
	I1009 18:47:38.009228   17401 main.go:141] libmachine: (addons-421083)     <boot dev='cdrom'/>
	I1009 18:47:38.009238   17401 main.go:141] libmachine: (addons-421083)     <boot dev='hd'/>
	I1009 18:47:38.009262   17401 main.go:141] libmachine: (addons-421083)     <bootmenu enable='no'/>
	I1009 18:47:38.009290   17401 main.go:141] libmachine: (addons-421083)   </os>
	I1009 18:47:38.009299   17401 main.go:141] libmachine: (addons-421083)   <devices>
	I1009 18:47:38.009307   17401 main.go:141] libmachine: (addons-421083)     <disk type='file' device='cdrom'>
	I1009 18:47:38.009318   17401 main.go:141] libmachine: (addons-421083)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/boot2docker.iso'/>
	I1009 18:47:38.009326   17401 main.go:141] libmachine: (addons-421083)       <target dev='hdc' bus='scsi'/>
	I1009 18:47:38.009331   17401 main.go:141] libmachine: (addons-421083)       <readonly/>
	I1009 18:47:38.009336   17401 main.go:141] libmachine: (addons-421083)     </disk>
	I1009 18:47:38.009343   17401 main.go:141] libmachine: (addons-421083)     <disk type='file' device='disk'>
	I1009 18:47:38.009351   17401 main.go:141] libmachine: (addons-421083)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:47:38.009363   17401 main.go:141] libmachine: (addons-421083)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/addons-421083.rawdisk'/>
	I1009 18:47:38.009370   17401 main.go:141] libmachine: (addons-421083)       <target dev='hda' bus='virtio'/>
	I1009 18:47:38.009384   17401 main.go:141] libmachine: (addons-421083)     </disk>
	I1009 18:47:38.009402   17401 main.go:141] libmachine: (addons-421083)     <interface type='network'>
	I1009 18:47:38.009415   17401 main.go:141] libmachine: (addons-421083)       <source network='mk-addons-421083'/>
	I1009 18:47:38.009424   17401 main.go:141] libmachine: (addons-421083)       <model type='virtio'/>
	I1009 18:47:38.009430   17401 main.go:141] libmachine: (addons-421083)     </interface>
	I1009 18:47:38.009436   17401 main.go:141] libmachine: (addons-421083)     <interface type='network'>
	I1009 18:47:38.009442   17401 main.go:141] libmachine: (addons-421083)       <source network='default'/>
	I1009 18:47:38.009448   17401 main.go:141] libmachine: (addons-421083)       <model type='virtio'/>
	I1009 18:47:38.009453   17401 main.go:141] libmachine: (addons-421083)     </interface>
	I1009 18:47:38.009459   17401 main.go:141] libmachine: (addons-421083)     <serial type='pty'>
	I1009 18:47:38.009465   17401 main.go:141] libmachine: (addons-421083)       <target port='0'/>
	I1009 18:47:38.009477   17401 main.go:141] libmachine: (addons-421083)     </serial>
	I1009 18:47:38.009488   17401 main.go:141] libmachine: (addons-421083)     <console type='pty'>
	I1009 18:47:38.009505   17401 main.go:141] libmachine: (addons-421083)       <target type='serial' port='0'/>
	I1009 18:47:38.009516   17401 main.go:141] libmachine: (addons-421083)     </console>
	I1009 18:47:38.009521   17401 main.go:141] libmachine: (addons-421083)     <rng model='virtio'>
	I1009 18:47:38.009527   17401 main.go:141] libmachine: (addons-421083)       <backend model='random'>/dev/random</backend>
	I1009 18:47:38.009533   17401 main.go:141] libmachine: (addons-421083)     </rng>
	I1009 18:47:38.009547   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009559   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009566   17401 main.go:141] libmachine: (addons-421083)   </devices>
	I1009 18:47:38.009570   17401 main.go:141] libmachine: (addons-421083) </domain>
	I1009 18:47:38.009576   17401 main.go:141] libmachine: (addons-421083) 
	I1009 18:47:38.015758   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:91:11:d6 in network default
	I1009 18:47:38.016255   17401 main.go:141] libmachine: (addons-421083) Ensuring networks are active...
	I1009 18:47:38.016273   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:38.016892   17401 main.go:141] libmachine: (addons-421083) Ensuring network default is active
	I1009 18:47:38.017146   17401 main.go:141] libmachine: (addons-421083) Ensuring network mk-addons-421083 is active
	I1009 18:47:38.018465   17401 main.go:141] libmachine: (addons-421083) Getting domain xml...
	I1009 18:47:38.019101   17401 main.go:141] libmachine: (addons-421083) Creating domain...
	I1009 18:47:39.430327   17401 main.go:141] libmachine: (addons-421083) Waiting to get IP...
	I1009 18:47:39.431067   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:39.431443   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:39.431503   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:39.431440   17423 retry.go:31] will retry after 262.024745ms: waiting for machine to come up
	I1009 18:47:39.695075   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:39.695601   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:39.695630   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:39.695542   17423 retry.go:31] will retry after 388.91699ms: waiting for machine to come up
	I1009 18:47:40.086047   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.086501   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.086536   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.086453   17423 retry.go:31] will retry after 325.478066ms: waiting for machine to come up
	I1009 18:47:40.414233   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.414744   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.414767   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.414704   17423 retry.go:31] will retry after 425.338344ms: waiting for machine to come up
	I1009 18:47:40.841260   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.841780   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.841819   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.841733   17423 retry.go:31] will retry after 735.054961ms: waiting for machine to come up
	I1009 18:47:41.578571   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:41.578975   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:41.578999   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:41.578938   17423 retry.go:31] will retry after 879.023333ms: waiting for machine to come up
	I1009 18:47:42.459480   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:42.460097   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:42.460126   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:42.460058   17423 retry.go:31] will retry after 1.0961467s: waiting for machine to come up
	I1009 18:47:43.558333   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:43.558716   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:43.558746   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:43.558674   17423 retry.go:31] will retry after 1.435955653s: waiting for machine to come up
	I1009 18:47:44.996421   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:44.996783   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:44.996809   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:44.996743   17423 retry.go:31] will retry after 1.468799411s: waiting for machine to come up
	I1009 18:47:46.466652   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:46.467054   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:46.467080   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:46.467019   17423 retry.go:31] will retry after 1.987591191s: waiting for machine to come up
	I1009 18:47:48.457235   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:48.457690   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:48.457718   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:48.457639   17423 retry.go:31] will retry after 2.254440714s: waiting for machine to come up
	I1009 18:47:50.713161   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:50.713641   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:50.713666   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:50.713606   17423 retry.go:31] will retry after 2.487139058s: waiting for machine to come up
	I1009 18:47:53.202934   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:53.203455   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:53.203495   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:53.203405   17423 retry.go:31] will retry after 3.308396575s: waiting for machine to come up
	I1009 18:47:56.515692   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:56.516102   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:56.516124   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:56.516062   17423 retry.go:31] will retry after 4.310196536s: waiting for machine to come up
	I1009 18:48:00.830339   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.830821   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has current primary IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.830842   17401 main.go:141] libmachine: (addons-421083) Found IP for machine: 192.168.39.156
	I1009 18:48:00.830877   17401 main.go:141] libmachine: (addons-421083) Reserving static IP address...
	I1009 18:48:00.831223   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find host DHCP lease matching {name: "addons-421083", mac: "52:54:00:90:f5:45", ip: "192.168.39.156"} in network mk-addons-421083
	I1009 18:48:00.898325   17401 main.go:141] libmachine: (addons-421083) DBG | Getting to WaitForSSH function...
	I1009 18:48:00.898358   17401 main.go:141] libmachine: (addons-421083) Reserved static IP address: 192.168.39.156
	I1009 18:48:00.898370   17401 main.go:141] libmachine: (addons-421083) Waiting for SSH to be available...
	I1009 18:48:00.900672   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.901110   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:00.901139   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.901342   17401 main.go:141] libmachine: (addons-421083) DBG | Using SSH client type: external
	I1009 18:48:00.901368   17401 main.go:141] libmachine: (addons-421083) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa (-rw-------)
	I1009 18:48:00.901402   17401 main.go:141] libmachine: (addons-421083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:48:00.901428   17401 main.go:141] libmachine: (addons-421083) DBG | About to run SSH command:
	I1009 18:48:00.901443   17401 main.go:141] libmachine: (addons-421083) DBG | exit 0
	I1009 18:48:01.030919   17401 main.go:141] libmachine: (addons-421083) DBG | SSH cmd err, output: <nil>: 
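The external-ssh probe logged above amounts to running the system ssh client against docker@192.168.39.156 with the machine key until `exit 0` succeeds. Below is a minimal standalone Go sketch of the same idea; the IP and key path are taken from this log, while the fixed retry interval and attempt count are illustrative rather than minikube's actual backoff.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshProbe mirrors the "exit 0" check in the log: it succeeds once the guest
	// accepts an SSH connection for user "docker" with the machine's private key.
	func sshProbe(ip, keyPath string) error {
		args := []string{
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@" + ip,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run()
	}

	func main() {
		ip := "192.168.39.156" // from the log above
		key := "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa"
		for attempt := 1; attempt <= 10; attempt++ {
			if err := sshProbe(ip, key); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second) // illustrative; the log shows a growing per-attempt backoff
		}
		fmt.Println("gave up waiting for SSH")
	}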
	I1009 18:48:01.031169   17401 main.go:141] libmachine: (addons-421083) KVM machine creation complete!
	I1009 18:48:01.031492   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:48:01.031988   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:01.032145   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:01.032303   17401 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:48:01.032314   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:01.033441   17401 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:48:01.033456   17401 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:48:01.033465   17401 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:48:01.033473   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.035611   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.035965   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.035991   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.036065   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.036213   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.036366   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.036510   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.036666   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.036832   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.036843   17401 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:48:01.142336   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:48:01.142357   17401 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:48:01.142365   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.144998   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.145322   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.145351   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.145498   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.145669   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.145835   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.145975   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.146132   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.146288   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.146299   17401 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:48:01.251406   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 18:48:01.251456   17401 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:48:01.251461   17401 main.go:141] libmachine: Provisioning with buildroot...
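Provisioner detection here is essentially a parse of the ID field from /etc/os-release ("buildroot" in the output above). A small local sketch of that parse, reading the file directly rather than over SSH:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// osID returns the ID= value from an os-release file, e.g. "buildroot" above.
	func osID(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
			}
		}
		return "", sc.Err()
	}

	func main() {
		id, err := osID("/etc/os-release")
		if err != nil {
			fmt.Println("detect failed:", err)
			return
		}
		fmt.Println("detected provisioner ID:", id) // "buildroot" on the minikube ISO
	}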
	I1009 18:48:01.251475   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.251684   17401 buildroot.go:166] provisioning hostname "addons-421083"
	I1009 18:48:01.251706   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.251880   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.254199   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.254503   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.254531   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.254653   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.254818   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.254937   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.255078   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.255255   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.255467   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.255486   17401 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-421083 && echo "addons-421083" | sudo tee /etc/hostname
	I1009 18:48:01.373395   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421083
	
	I1009 18:48:01.373425   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.375901   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.376289   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.376313   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.376475   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.376657   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.376789   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.376920   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.377083   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.377254   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.377277   17401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-421083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-421083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-421083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:48:01.493914   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:48:01.493942   17401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 18:48:01.493991   17401 buildroot.go:174] setting up certificates
	I1009 18:48:01.494007   17401 provision.go:84] configureAuth start
	I1009 18:48:01.494018   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.494259   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:01.496681   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.497081   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.497104   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.497223   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.499886   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.500217   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.500245   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.500288   17401 provision.go:143] copyHostCerts
	I1009 18:48:01.500368   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 18:48:01.500494   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 18:48:01.500583   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 18:48:01.500630   17401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.addons-421083 san=[127.0.0.1 192.168.39.156 addons-421083 localhost minikube]
	I1009 18:48:01.803364   17401 provision.go:177] copyRemoteCerts
	I1009 18:48:01.803416   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:48:01.803437   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.805981   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.806295   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.806324   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.806464   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.806662   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.806810   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.806927   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:01.889553   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:48:01.913620   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:48:01.936905   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:48:01.960014   17401 provision.go:87] duration metric: took 465.99311ms to configureAuth
	I1009 18:48:01.960042   17401 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:48:01.960241   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:01.960317   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.963075   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.963419   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.963460   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.963601   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.963785   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.963939   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.964063   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.964206   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.964382   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.964401   17401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:48:02.190106   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:48:02.190144   17401 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:48:02.190156   17401 main.go:141] libmachine: (addons-421083) Calling .GetURL
	I1009 18:48:02.191369   17401 main.go:141] libmachine: (addons-421083) DBG | Using libvirt version 6000000
	I1009 18:48:02.193485   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.193859   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.193887   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.194019   17401 main.go:141] libmachine: Docker is up and running!
	I1009 18:48:02.194034   17401 main.go:141] libmachine: Reticulating splines...
	I1009 18:48:02.194042   17401 client.go:171] duration metric: took 25.194285944s to LocalClient.Create
	I1009 18:48:02.194070   17401 start.go:167] duration metric: took 25.194353336s to libmachine.API.Create "addons-421083"
	I1009 18:48:02.194088   17401 start.go:293] postStartSetup for "addons-421083" (driver="kvm2")
	I1009 18:48:02.194103   17401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:48:02.194124   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.194340   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:48:02.194363   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.196373   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.196652   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.196672   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.196791   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.196930   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.197056   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.197157   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.277130   17401 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:48:02.281365   17401 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 18:48:02.281391   17401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 18:48:02.281474   17401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 18:48:02.281506   17401 start.go:296] duration metric: took 87.409181ms for postStartSetup
	I1009 18:48:02.281540   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:48:02.282055   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:02.284406   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.284731   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.284757   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.284934   17401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json ...
	I1009 18:48:02.285120   17401 start.go:128] duration metric: took 25.302528351s to createHost
	I1009 18:48:02.285140   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.287015   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.287341   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.287367   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.287516   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.287680   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.287802   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.287910   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.288034   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:02.288218   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:02.288231   17401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:48:02.395749   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728499682.371890623
	
	I1009 18:48:02.395770   17401 fix.go:216] guest clock: 1728499682.371890623
	I1009 18:48:02.395777   17401 fix.go:229] Guest: 2024-10-09 18:48:02.371890623 +0000 UTC Remote: 2024-10-09 18:48:02.285131602 +0000 UTC m=+25.400487636 (delta=86.759021ms)
	I1009 18:48:02.395800   17401 fix.go:200] guest clock delta is within tolerance: 86.759021ms
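The clock check above compares the guest's `date +%s.%N` output against the host clock and accepts a small delta (86.759021ms here). The following is a rough sketch of that comparison using the timestamp from the log; the one-second tolerance is an assumed placeholder, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds) into a
	// time.Time; it assumes the 9-digit fractional part that %N prints.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1728499682.371890623") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := guest.Sub(time.Now())
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // illustrative threshold only
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}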
	I1009 18:48:02.395807   17401 start.go:83] releasing machines lock for "addons-421083", held for 25.413289434s
	I1009 18:48:02.395835   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.396064   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:02.398584   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.398954   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.398990   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.399113   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399660   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399829   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399913   17401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:48:02.399967   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.399968   17401 ssh_runner.go:195] Run: cat /version.json
	I1009 18:48:02.400017   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.402492   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402673   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402814   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.402842   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402956   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.402967   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.402980   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.403146   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.403198   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.403318   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.403376   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.403450   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.403839   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.403945   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.480633   17401 ssh_runner.go:195] Run: systemctl --version
	I1009 18:48:02.509100   17401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:48:02.669262   17401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:48:02.674791   17401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:48:02.674854   17401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:48:02.692275   17401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
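The find/mv step above sidelines bridge- and podman-flavoured CNI configs by renaming them to *.mk_disabled. A sketch of the same rename logic in Go, matching on file names only as the log's find expression does (run against /etc/cni/net.d as root, like the logged command):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman CNI configs to *.mk_disabled.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println("error:", err)
		}
		fmt.Println("disabled CNI configs:", disabled)
	}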
	I1009 18:48:02.692297   17401 start.go:495] detecting cgroup driver to use...
	I1009 18:48:02.692357   17401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:48:02.708890   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:48:02.722433   17401 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:48:02.722490   17401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:48:02.735669   17401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:48:02.748859   17401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:48:02.866868   17401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:48:03.031024   17401 docker.go:233] disabling docker service ...
	I1009 18:48:03.031122   17401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:48:03.046146   17401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:48:03.059418   17401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:48:03.167969   17401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:48:03.282724   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:48:03.296703   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:48:03.314454   17401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:48:03.314523   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.324913   17401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:48:03.324969   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.335108   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.345321   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.355784   17401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:48:03.366770   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.377216   17401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.393604   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.403613   17401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:48:03.412803   17401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:48:03.412852   17401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:48:03.427090   17401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:48:03.437364   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:03.549313   17401 ssh_runner.go:195] Run: sudo systemctl restart crio
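The block of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, sysctls) before restarting crio. Below is a sketch of the two simplest rewrites, pause_image and cgroup_manager, done with Go regexps instead of sed; the values are the ones from the log.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf forces a pause image and a cgroup manager in the given
	// crio drop-in config, the same kind of line rewrite the logged sed does.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10", "cgroupfs") // values from the log above
		if err != nil {
			fmt.Println("rewrite failed:", err)
			return
		}
		fmt.Println("rewrote crio config; restart crio to apply")
	}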
	I1009 18:48:03.645167   17401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:48:03.645276   17401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:48:03.649832   17401 start.go:563] Will wait 60s for crictl version
	I1009 18:48:03.649895   17401 ssh_runner.go:195] Run: which crictl
	I1009 18:48:03.653543   17401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:48:03.696440   17401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:48:03.696572   17401 ssh_runner.go:195] Run: crio --version
	I1009 18:48:03.723729   17401 ssh_runner.go:195] Run: crio --version
	I1009 18:48:03.753445   17401 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 18:48:03.754568   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:03.757062   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:03.757375   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:03.757402   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:03.757605   17401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 18:48:03.761539   17401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:48:03.773458   17401 kubeadm.go:883] updating cluster {Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:48:03.773582   17401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:48:03.773640   17401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:48:03.804576   17401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 18:48:03.804637   17401 ssh_runner.go:195] Run: which lz4
	I1009 18:48:03.808884   17401 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 18:48:03.813214   17401 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 18:48:03.813241   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 18:48:05.084237   17401 crio.go:462] duration metric: took 1.275371492s to copy over tarball
	I1009 18:48:05.084338   17401 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 18:48:07.168124   17401 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083721492s)
	I1009 18:48:07.168152   17401 crio.go:469] duration metric: took 2.083874293s to extract the tarball
	I1009 18:48:07.168162   17401 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 18:48:07.204594   17401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:48:07.245226   17401 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:48:07.245247   17401 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:48:07.245256   17401 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.31.1 crio true true} ...
	I1009 18:48:07.245376   17401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-421083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:48:07.245454   17401 ssh_runner.go:195] Run: crio config
	I1009 18:48:07.290260   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:48:07.290286   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:48:07.290322   17401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:48:07.290344   17401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-421083 NodeName:addons-421083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:48:07.290463   17401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-421083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:48:07.290524   17401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:48:07.300488   17401 binaries.go:44] Found k8s binaries, skipping transfer
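The "Found k8s binaries" decision is just a check that version-pinned files already exist under /var/lib/minikube/binaries/v1.31.1. The exact file set is not shown in the log, so the kubelet/kubeadm pair below is an assumption; the sketch only illustrates the stat-and-skip pattern.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// haveKubeBinaries reports whether the assumed binaries already exist under
	// the version-pinned directory checked in the log above.
	func haveKubeBinaries(version string) bool {
		dir := filepath.Join("/var/lib/minikube/binaries", version)
		for _, name := range []string{"kubelet", "kubeadm"} {
			if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
				return false
			}
		}
		return true
	}

	func main() {
		if haveKubeBinaries("v1.31.1") {
			fmt.Println("found k8s binaries, skipping transfer")
		} else {
			fmt.Println("k8s binaries missing, transfer needed")
		}
	}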
	I1009 18:48:07.300579   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:48:07.309650   17401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1009 18:48:07.325786   17401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:48:07.342140   17401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1009 18:48:07.358622   17401 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I1009 18:48:07.362600   17401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:48:07.374477   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:07.485056   17401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:48:07.502430   17401 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083 for IP: 192.168.39.156
	I1009 18:48:07.502456   17401 certs.go:194] generating shared ca certs ...
	I1009 18:48:07.502478   17401 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.502634   17401 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 18:48:07.613829   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt ...
	I1009 18:48:07.613862   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt: {Name:mkd74ce774b5650363e1df082fa10c8cece0b7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.614055   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key ...
	I1009 18:48:07.614070   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key: {Name:mk4789884a13b38a73e51d5c1c8759c998d7f013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.614186   17401 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 18:48:07.800680   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt ...
	I1009 18:48:07.800711   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt: {Name:mkb557c5d244639ebef20bbe3aff9ae718550707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.800879   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key ...
	I1009 18:48:07.800889   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key: {Name:mk5ec2b0aefcc430750ca0126384175e68dc86da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.800958   17401 certs.go:256] generating profile certs ...
	I1009 18:48:07.801011   17401 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key
	I1009 18:48:07.801031   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt with IP's: []
	I1009 18:48:08.067278   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt ...
	I1009 18:48:08.067305   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: {Name:mk59146854d725388c4dd57b83785f3c38be0fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.067456   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key ...
	I1009 18:48:08.067465   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key: {Name:mkfe4cce716a96d331355a3d3fdeccb1cddc5ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.067534   17401 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342
	I1009 18:48:08.067551   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.156]
	I1009 18:48:08.178724   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 ...
	I1009 18:48:08.178750   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342: {Name:mkc5352535e88481616dd4eefcb57376b1e04b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.178894   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342 ...
	I1009 18:48:08.178905   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342: {Name:mk99ad422c16af24903e5c16277883291bc9af71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.178972   17401 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt
	I1009 18:48:08.179039   17401 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key
	I1009 18:48:08.179120   17401 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key
	I1009 18:48:08.179144   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt with IP's: []
	I1009 18:48:08.356797   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt ...
	I1009 18:48:08.356832   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt: {Name:mk9c7e610bc33161325374a91664eaebd6756667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.357010   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key ...
	I1009 18:48:08.357023   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key: {Name:mkd9569ac90f623608f9055d0e9e2641756234a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
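The certs.go steps above create a self-signed CA and then CA-signed leaf certs, including an apiserver cert with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.156. Below is a condensed crypto/x509 sketch of that pattern; the CommonNames, key sizes and validity periods are illustrative, not minikube's exact values.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	// newCA creates a self-signed CA certificate and key, roughly what the
	// `generating "minikubeCA" ca cert` step above does (details differ).
	func newCA() (*x509.Certificate, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "exampleCA"}, // hypothetical name
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
			IsCA:                  true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, nil, err
		}
		cert, err := x509.ParseCertificate(der)
		return cert, key, err
	}

	// newServerCert issues a CA-signed cert carrying the IP SANs seen in the log.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.156"),
			},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}

	func main() {
		ca, caKey, err := newCA()
		if err != nil {
			panic(err)
		}
		der, err := newServerCert(ca, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		fmt.Fprintln(os.Stderr, "issued CA-signed cert with the apiserver SANs from the log")
	}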
	I1009 18:48:08.357213   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:48:08.357249   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:48:08.357281   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:48:08.357313   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 18:48:08.357905   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:48:08.385490   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:48:08.408937   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:48:08.431878   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 18:48:08.458917   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:48:08.483605   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:48:08.510051   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:48:08.534864   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:48:08.559913   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:48:08.585173   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:48:08.603075   17401 ssh_runner.go:195] Run: openssl version
	I1009 18:48:08.609004   17401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:48:08.619670   17401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.624351   17401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.624400   17401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.630368   17401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
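The two commands above compute the CA certificate's OpenSSL subject hash (b5213941) and symlink <hash>.0 in /etc/ssl/certs to the PEM so OpenSSL-based clients pick it up. A sketch of the same pair of steps, shelling out to openssl for the hash:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash asks openssl for the certificate's subject hash, then
	// symlinks <hash>.0 in the system certs dir at the PEM, like the log's ln -fs.
	func linkBySubjectHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // emulate ln -f: replace an existing link if present
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("created", link)
	}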
	I1009 18:48:08.641154   17401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:48:08.645302   17401 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:48:08.645359   17401 kubeadm.go:392] StartCluster: {Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:48:08.645451   17401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:48:08.645504   17401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:48:08.682129   17401 cri.go:89] found id: ""
	I1009 18:48:08.682207   17401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:48:08.692654   17401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:48:08.704954   17401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:08.718382   17401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:08.718413   17401 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:08.718468   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:08.728030   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:08.728096   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:08.738202   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:08.747870   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:08.747937   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:08.758405   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:08.767746   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:08.767815   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:08.777291   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:08.786050   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:08.786104   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:08.795246   17401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 18:48:08.844215   17401 kubeadm.go:310] W1009 18:48:08.827413     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:48:08.845699   17401 kubeadm.go:310] W1009 18:48:08.828967     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:48:08.950491   17401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:19.199053   17401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:48:19.199172   17401 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:48:19.199289   17401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:19.199432   17401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:19.199571   17401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:19.199666   17401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:19.201344   17401 out.go:235]   - Generating certificates and keys ...
	I1009 18:48:19.201446   17401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:48:19.201520   17401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:19.201608   17401 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:19.201669   17401 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:19.201751   17401 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:19.201802   17401 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:19.201848   17401 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:19.202008   17401 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-421083 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1009 18:48:19.202073   17401 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:19.202231   17401 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-421083 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1009 18:48:19.202314   17401 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:19.202368   17401 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:19.202408   17401 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:48:19.202461   17401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:19.202520   17401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:19.202576   17401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:19.202642   17401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:19.202732   17401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:19.202808   17401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:19.202917   17401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:19.203006   17401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:19.204545   17401 out.go:235]   - Booting up control plane ...
	I1009 18:48:19.204657   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:19.204757   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:19.204849   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:19.204997   17401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:19.205141   17401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:19.205204   17401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:48:19.205375   17401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:19.205501   17401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:19.205556   17401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001164201s
	I1009 18:48:19.205634   17401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:48:19.205700   17401 kubeadm.go:310] [api-check] The API server is healthy after 4.502485036s
	I1009 18:48:19.205799   17401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:48:19.205933   17401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:48:19.206020   17401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:48:19.206378   17401 kubeadm.go:310] [mark-control-plane] Marking the node addons-421083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:48:19.206514   17401 kubeadm.go:310] [bootstrap-token] Using token: g5juxz.ri7598v7sv8u8xm3
	I1009 18:48:19.207850   17401 out.go:235]   - Configuring RBAC rules ...
	I1009 18:48:19.207953   17401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:48:19.208025   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:48:19.208143   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:48:19.208271   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:48:19.208371   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:48:19.208445   17401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:48:19.208562   17401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:48:19.208619   17401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:48:19.208667   17401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:48:19.208673   17401 kubeadm.go:310] 
	I1009 18:48:19.208725   17401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:48:19.208730   17401 kubeadm.go:310] 
	I1009 18:48:19.208811   17401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:48:19.208820   17401 kubeadm.go:310] 
	I1009 18:48:19.208848   17401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:48:19.208899   17401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:48:19.208942   17401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:48:19.208948   17401 kubeadm.go:310] 
	I1009 18:48:19.209010   17401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:48:19.209019   17401 kubeadm.go:310] 
	I1009 18:48:19.209070   17401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:48:19.209077   17401 kubeadm.go:310] 
	I1009 18:48:19.209123   17401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:48:19.209213   17401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:48:19.209301   17401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:48:19.209309   17401 kubeadm.go:310] 
	I1009 18:48:19.209377   17401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:48:19.209444   17401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:48:19.209450   17401 kubeadm.go:310] 
	I1009 18:48:19.209537   17401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g5juxz.ri7598v7sv8u8xm3 \
	I1009 18:48:19.209638   17401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 18:48:19.209661   17401 kubeadm.go:310] 	--control-plane 
	I1009 18:48:19.209668   17401 kubeadm.go:310] 
	I1009 18:48:19.209787   17401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:48:19.209796   17401 kubeadm.go:310] 
	I1009 18:48:19.209876   17401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g5juxz.ri7598v7sv8u8xm3 \
	I1009 18:48:19.209977   17401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 18:48:19.209989   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:48:19.209998   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:48:19.211426   17401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 18:48:19.212648   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 18:48:19.223680   17401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 18:48:19.242617   17401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:48:19.242731   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:19.242762   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-421083 minikube.k8s.io/updated_at=2024_10_09T18_48_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-421083 minikube.k8s.io/primary=true
	I1009 18:48:19.353738   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:19.353738   17401 ops.go:34] apiserver oom_adj: -16
	I1009 18:48:19.854521   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:20.353977   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:20.854545   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:21.354805   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:21.854008   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:22.354336   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:22.854597   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.354619   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.854652   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.983308   17401 kubeadm.go:1113] duration metric: took 4.740633863s to wait for elevateKubeSystemPrivileges
	I1009 18:48:23.983345   17401 kubeadm.go:394] duration metric: took 15.337989506s to StartCluster
	I1009 18:48:23.983369   17401 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:23.983500   17401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:48:23.983994   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:23.984233   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:48:23.984259   17401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:48:23.984323   17401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:48:23.984478   17401 addons.go:69] Setting yakd=true in profile "addons-421083"
	I1009 18:48:23.984491   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:23.984496   17401 addons.go:69] Setting inspektor-gadget=true in profile "addons-421083"
	I1009 18:48:23.984545   17401 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-421083"
	I1009 18:48:23.984554   17401 addons.go:69] Setting ingress-dns=true in profile "addons-421083"
	I1009 18:48:23.984561   17401 addons.go:234] Setting addon inspektor-gadget=true in "addons-421083"
	I1009 18:48:23.984570   17401 addons.go:234] Setting addon ingress-dns=true in "addons-421083"
	I1009 18:48:23.984572   17401 addons.go:69] Setting registry=true in profile "addons-421083"
	I1009 18:48:23.984580   17401 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-421083"
	I1009 18:48:23.984563   17401 addons.go:69] Setting cloud-spanner=true in profile "addons-421083"
	I1009 18:48:23.984587   17401 addons.go:234] Setting addon registry=true in "addons-421083"
	I1009 18:48:23.984595   17401 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-421083"
	I1009 18:48:23.984602   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984607   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984608   17401 addons.go:234] Setting addon cloud-spanner=true in "addons-421083"
	I1009 18:48:23.984618   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984629   17401 addons.go:69] Setting metrics-server=true in profile "addons-421083"
	I1009 18:48:23.984633   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984634   17401 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-421083"
	I1009 18:48:23.984640   17401 addons.go:234] Setting addon metrics-server=true in "addons-421083"
	I1009 18:48:23.984657   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984660   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984505   17401 addons.go:234] Setting addon yakd=true in "addons-421083"
	I1009 18:48:23.985067   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984552   17401 addons.go:69] Setting storage-provisioner=true in profile "addons-421083"
	I1009 18:48:23.985076   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985080   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985093   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.984619   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.985109   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985120   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985124   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985141   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985145   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984515   17401 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-421083"
	I1009 18:48:23.985202   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985221   17401 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-421083"
	I1009 18:48:23.984533   17401 addons.go:69] Setting volumesnapshots=true in profile "addons-421083"
	I1009 18:48:23.985717   17401 addons.go:234] Setting addon volumesnapshots=true in "addons-421083"
	I1009 18:48:23.985746   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.985841   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985874   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.986018   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985113   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984517   17401 addons.go:69] Setting default-storageclass=true in profile "addons-421083"
	I1009 18:48:23.986192   17401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-421083"
	I1009 18:48:23.986700   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.986748   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984544   17401 addons.go:69] Setting gcp-auth=true in profile "addons-421083"
	I1009 18:48:23.987027   17401 mustload.go:65] Loading cluster: addons-421083
	I1009 18:48:23.987285   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:23.987753   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.987807   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.989642   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.989687   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.992145   17401 out.go:177] * Verifying Kubernetes components...
	I1009 18:48:23.984524   17401 addons.go:69] Setting volcano=true in profile "addons-421083"
	I1009 18:48:23.992646   17401 addons.go:234] Setting addon volcano=true in "addons-421083"
	I1009 18:48:23.992678   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984539   17401 addons.go:69] Setting ingress=true in profile "addons-421083"
	I1009 18:48:23.993968   17401 addons.go:234] Setting addon ingress=true in "addons-421083"
	I1009 18:48:23.994006   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.986041   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985086   17401 addons.go:234] Setting addon storage-provisioner=true in "addons-421083"
	I1009 18:48:23.997102   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.997395   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:23.997852   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.997894   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.004919   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I1009 18:48:24.005223   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I1009 18:48:24.005452   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.005654   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.005752   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.011273   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1009 18:48:24.011418   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.011467   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.011611   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.012001   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.012356   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.012381   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.012850   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.014603   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I1009 18:48:24.023502   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1009 18:48:24.023530   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I1009 18:48:24.024069   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024079   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024110   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024114   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024175   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024213   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024498   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024504   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024542   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024556   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024727   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I1009 18:48:24.024999   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025075   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025117   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025235   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025931   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.026303   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.026727   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.026753   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027186   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027234   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.027336   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027264   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.027690   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027847   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.028057   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.028435   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.028456   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.028767   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.029591   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.029637   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.029825   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.029855   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.039874   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.039978   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.040084   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.040128   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.059856   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I1009 18:48:24.060001   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I1009 18:48:24.060199   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.060611   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.060640   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.061016   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.061024   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.061497   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I1009 18:48:24.061552   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.061568   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.061894   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.061957   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.062420   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.062436   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.062530   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.062548   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.062966   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.063014   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.065373   17401 addons.go:234] Setting addon default-storageclass=true in "addons-421083"
	I1009 18:48:24.065409   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.065781   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.065812   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.065989   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I1009 18:48:24.066043   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I1009 18:48:24.066107   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I1009 18:48:24.066161   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.066187   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.066229   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.066492   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.066509   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.066546   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.066843   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.066875   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.068515   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.068590   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.068643   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.068789   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.068802   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.069453   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.069472   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.069547   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.069896   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.069911   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.070349   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.070376   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.070576   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.070666   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I1009 18:48:24.070854   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.070983   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.071808   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.071826   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.072250   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.072886   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.072922   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.072986   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.073066   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.073105   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I1009 18:48:24.073181   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.074902   17401 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:48:24.075762   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I1009 18:48:24.075872   17401 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-421083"
	I1009 18:48:24.075912   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.076275   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.076317   17401 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1009 18:48:24.076740   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36325
	I1009 18:48:24.076320   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.076546   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.077115   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:48:24.077132   17401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:48:24.077151   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.077879   17401 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:48:24.077895   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:48:24.077912   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.078212   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I1009 18:48:24.078650   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.078759   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.078766   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.079198   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.079214   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.079599   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.079780   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.080571   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.080759   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.082079   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.083174   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.083498   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.083736   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.083756   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.084054   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.084068   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.084121   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.084170   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.084183   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.084325   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.084623   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.084700   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.084787   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.085015   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.085188   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.085528   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.085776   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.087849   17401 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:48:24.089181   17401 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:48:24.089198   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:48:24.089215   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.090477   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.090630   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.092042   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.092104   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.092611   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.092681   17401 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:48:24.092979   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.093000   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.093150   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.093337   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.093545   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.093679   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.094458   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I1009 18:48:24.094564   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1009 18:48:24.095018   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.095360   17401 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:48:24.095556   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.095627   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.095643   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.095644   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I1009 18:48:24.096196   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.096394   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.096616   17401 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:48:24.096652   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:48:24.096671   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.098424   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I1009 18:48:24.098714   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.100308   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.100400   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.100420   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.100437   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.100484   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.100500   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.100571   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.100582   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.100626   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.100632   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.100696   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.101488   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.101511   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.101575   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.101658   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.101698   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.101709   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.101836   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.101886   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.101928   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.102171   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.102642   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.102659   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.102804   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.102838   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.103750   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.104669   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.104702   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.105664   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.107515   17401 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:48:24.109023   17401 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:48:24.109042   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:48:24.109061   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.109434   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I1009 18:48:24.109886   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.110385   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.110407   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.110853   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.111047   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.112498   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.112715   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.113091   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.113118   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.113290   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.113485   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.114426   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:48:24.115502   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:48:24.115516   17401 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:48:24.115542   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.115608   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I1009 18:48:24.116092   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1009 18:48:24.116565   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.116738   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.116860   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.117128   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.117287   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.117299   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.117792   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.117809   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.118511   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.118522   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.119100   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.119141   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.119384   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I1009 18:48:24.119500   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.119513   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.119550   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.119643   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.119679   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.119818   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.119872   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.119946   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.119960   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.120310   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.120332   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.120899   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.121036   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.121899   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.123845   17401 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:48:24.124514   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38171
	I1009 18:48:24.124645   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.125574   17401 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:48:24.125591   17401 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:48:24.125609   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.125671   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.126363   17401 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:48:24.127491   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:48:24.127507   17401 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:48:24.127528   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.130230   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130487   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130754   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.130771   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130860   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.130874   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.131089   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.131148   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.131230   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.131270   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.131379   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.131417   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.131533   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.131901   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.131914   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.132114   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.132616   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.132817   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.134345   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.135128   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
	I1009 18:48:24.135279   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I1009 18:48:24.135650   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.136119   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.136143   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.136362   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:48:24.136526   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.136694   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.136968   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.137087   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1009 18:48:24.137546   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.138002   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.138026   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.138219   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.138353   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.138370   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.138451   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.138731   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.138730   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.138937   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.139027   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:48:24.139822   17401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:48:24.140445   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.140666   17401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:48:24.140684   17401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:48:24.140700   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.140799   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.141814   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:24.141838   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:24.141876   17401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:48:24.141888   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:48:24.141899   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.142249   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:24.142263   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:24.142275   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:24.142283   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:24.142290   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:24.142469   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:24.142480   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	W1009 18:48:24.142544   17401 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
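	[editor's note] The warning above means the volcano addon was skipped because it does not support the crio runtime used by this job. If volcano had been requested explicitly, it could be turned off for this profile with the standard addons CLI; shown only as an illustration, using the profile name from the log:

	    minikube addons disable volcano -p addons-421083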
	I1009 18:48:24.142571   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:48:24.143972   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:48:24.144192   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.144607   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.144635   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.144764   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.144927   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.145105   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.145259   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.145933   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1009 18:48:24.146298   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:48:24.146512   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.146612   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.146934   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.146952   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.146999   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.147018   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.147251   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.147360   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.147399   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.147535   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.147541   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.147699   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.148809   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:48:24.149343   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.149665   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1009 18:48:24.150040   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.150550   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.150573   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.150822   17401 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:48:24.151119   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.151442   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.151723   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:48:24.152913   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.153271   17401 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:48:24.154416   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:48:24.154457   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:48:24.154637   17401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:48:24.154660   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:48:24.154676   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.155982   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:48:24.156000   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:48:24.156020   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.157252   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:24.157792   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.158352   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.158380   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.158525   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.158720   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.158863   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.158994   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.159539   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.159689   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:24.160157   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.160180   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.160446   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.160589   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.160753   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.160902   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.161216   17401 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:48:24.161232   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:48:24.161243   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.164236   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.164663   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.164682   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.164796   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.164927   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.165067   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.165173   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	W1009 18:48:24.171028   17401 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56188->192.168.39.156:22: read: connection reset by peer
	I1009 18:48:24.171058   17401 retry.go:31] will retry after 200.986757ms: ssh: handshake failed: read tcp 192.168.39.1:56188->192.168.39.156:22: read: connection reset by peer
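	[editor's note] The sshutil lines above show minikube opening SSH sessions to the node with the generated machine key, and one handshake being retried after a connection reset. For manual debugging, a roughly equivalent connection can be made with plain ssh; this is only a sketch assembled from the IP, port, username, and key path printed in the log, not a command the test runs:

	    ssh -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa \
	        -p 22 docker@192.168.39.156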
	I1009 18:48:24.387734   17401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:48:24.387747   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:48:24.456892   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:48:24.456922   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:48:24.496734   17401 node_ready.go:35] waiting up to 6m0s for node "addons-421083" to be "Ready" ...
	I1009 18:48:24.499176   17401 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:48:24.499201   17401 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:48:24.499806   17401 node_ready.go:49] node "addons-421083" has status "Ready":"True"
	I1009 18:48:24.499825   17401 node_ready.go:38] duration metric: took 3.05637ms for node "addons-421083" to be "Ready" ...
	I1009 18:48:24.499833   17401 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:24.511085   17401 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace to be "Ready" ...
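	[editor's note] node_ready and pod_ready above poll the node condition and the system-critical pods through the API server. A roughly equivalent check from the command line, assuming the addons-421083 kubeconfig context, would be (illustrative only, not part of the test):

	    kubectl --context addons-421083 get node addons-421083 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    kubectl --context addons-421083 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m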
	I1009 18:48:24.572143   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:48:24.646077   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:48:24.646105   17401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:48:24.648276   17401 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:48:24.648296   17401 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:48:24.681555   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:48:24.697573   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:48:24.698686   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:48:24.698708   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:48:24.699608   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:48:24.732308   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:48:24.732334   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:48:24.734560   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:48:24.744474   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:48:24.755661   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:48:24.755682   17401 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:48:24.772242   17401 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:48:24.772272   17401 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:48:24.809704   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:48:24.836466   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:48:24.836494   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:48:24.859641   17401 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:48:24.859659   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:48:24.871605   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:48:24.871638   17401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:48:24.959270   17401 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:48:24.959295   17401 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:48:24.960292   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:48:24.960313   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:48:24.972785   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:48:24.972808   17401 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:48:24.993079   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:48:24.993103   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:48:25.087596   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:48:25.116437   17401 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:48:25.116462   17401 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:48:25.158627   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:48:25.238128   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:48:25.238157   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:48:25.258350   17401 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:48:25.258373   17401 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:48:25.262133   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:48:25.262158   17401 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:48:25.272123   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:48:25.272145   17401 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:48:25.453991   17401 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:25.454012   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:48:25.517612   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:48:25.517639   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:48:25.527782   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:48:25.527803   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:48:25.595289   17401 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:48:25.595317   17401 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:48:25.742797   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:25.786098   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:48:25.786124   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:48:25.851056   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:48:25.919100   17401 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:48:25.919127   17401 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:48:26.145587   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:48:26.145610   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:48:26.226737   17401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.838961385s)
	I1009 18:48:26.226765   17401 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
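	[editor's note] The sed pipeline that just completed rewrites the coredns ConfigMap: it inserts a hosts plugin block immediately before the existing "forward . /etc/resolv.conf" stanza and a "log" directive immediately before "errors". Reconstructed from that sed expression (not copied from the cluster), the relevant part of the resulting Corefile should look roughly like:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }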
	I1009 18:48:26.330221   17401 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:48:26.330240   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:48:26.452987   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:48:26.453015   17401 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:48:26.519131   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:26.580284   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:48:26.718762   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:48:26.718783   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:48:26.739031   17401 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-421083" context rescaled to 1 replicas
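	[editor's note] kapi.go rescales the coredns deployment to a single replica for this single-node cluster. The equivalent kubectl operation, shown only as an illustration, is:

	    kubectl --context addons-421083 -n kube-system scale deployment coredns --replicas=1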
	I1009 18:48:27.035235   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:48:27.035257   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:48:27.327859   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:48:27.327886   17401 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:48:27.700854   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:48:28.554767   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:28.623975   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.051801004s)
	I1009 18:48:28.624031   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:28.624043   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:28.624429   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:28.624458   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:28.624469   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:28.624477   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:28.624433   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:28.624743   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:28.624793   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:28.624802   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015110   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.333520889s)
	I1009 18:48:29.015154   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015166   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015202   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.317598584s)
	I1009 18:48:29.015244   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015259   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015267   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.315639322s)
	I1009 18:48:29.015289   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015296   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015633   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015640   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015657   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015658   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015642   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015666   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015664   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015673   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015675   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015682   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015688   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015878   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015903   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015910   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015948   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015972   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015981   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.016044   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.016062   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.016078   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.016085   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.017435   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.017461   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.017478   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.093136   17401 pod_ready.go:93] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:29.093159   17401 pod_ready.go:82] duration metric: took 4.582043559s for pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:29.093169   17401 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:29.205899   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.205916   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.206171   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.206219   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.206256   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:31.113763   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:31.128056   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:48:31.128090   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:31.131526   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.132070   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:31.132099   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.132301   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:31.132488   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:31.132642   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:31.132775   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:31.433634   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:48:31.495545   17401 addons.go:234] Setting addon gcp-auth=true in "addons-421083"
	I1009 18:48:31.495634   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:31.496075   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:31.496124   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:31.511322   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43883
	I1009 18:48:31.511734   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:31.512242   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:31.512266   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:31.512597   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:31.513067   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:31.513091   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:31.527440   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1009 18:48:31.527916   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:31.528406   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:31.528431   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:31.528722   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:31.528953   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:31.530508   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:31.530711   17401 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:48:31.530735   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:31.534086   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.534511   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:31.534541   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.534729   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:31.534890   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:31.535076   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:31.535258   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:32.025573   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.290977216s)
	I1009 18:48:32.025637   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025651   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025647   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.281132825s)
	I1009 18:48:32.025687   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025705   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025722   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.938101476s)
	I1009 18:48:32.025692   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.215959598s)
	I1009 18:48:32.025773   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025787   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025821   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.867163386s)
	I1009 18:48:32.025749   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025837   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025842   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025853   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025952   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.283123896s)
	I1009 18:48:32.025986   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.025999   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026008   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026015   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026022   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.174931301s)
	I1009 18:48:32.026040   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026050   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	W1009 18:48:32.025985   17401 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:48:32.026078   17401 retry.go:31] will retry after 291.827465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
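	[editor's note] The failure above is the usual CRD ordering problem: the VolumeSnapshotClass object and the CRD that defines it are sent in the same apply, and the API server has not yet registered the new kind when the custom resource is mapped, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (the later apply at 18:48:32 adds --force). A manual workaround, sketched here as an assumption rather than what the addon itself does, is to apply the CRD first and wait for it to be established before applying the class:

	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml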
	I1009 18:48:32.026153   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.44584113s)
	I1009 18:48:32.026169   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026178   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026184   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026187   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026194   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026195   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026199   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026156   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026214   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026223   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026232   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026239   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026200   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026272   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026282   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026289   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026202   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026313   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026401   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026423   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026431   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026438   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026439   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026443   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026465   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026471   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026478   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026483   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026861   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026889   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026895   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026906   17401 addons.go:475] Verifying addon metrics-server=true in "addons-421083"
	I1009 18:48:32.028309   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.028335   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028342   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028350   17401 addons.go:475] Verifying addon ingress=true in "addons-421083"
	I1009 18:48:32.028585   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028593   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028645   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028655   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028664   17401 addons.go:475] Verifying addon registry=true in "addons-421083"
	I1009 18:48:32.028669   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028677   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028748   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.028768   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.030391   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.030411   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.030419   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.030630   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.030657   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.030895   17401 out.go:177] * Verifying ingress addon...
	I1009 18:48:32.031989   17401 out.go:177] * Verifying registry addon...
	I1009 18:48:32.031990   17401 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-421083 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:48:32.033679   17401 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:48:32.038106   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:48:32.164704   17401 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:48:32.164726   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.164948   17401 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:48:32.164967   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.237713   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.237740   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.238051   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.238070   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.318541   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:32.667558   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.668179   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.048787   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.049940   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.116710   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:33.576325   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.576686   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.586812   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.885920713s)
	I1009 18:48:33.586864   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:33.586875   17401 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.056142547s)
	I1009 18:48:33.586882   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:33.587347   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:33.587380   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:33.587394   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:33.587400   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:33.587655   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:33.587694   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:33.587705   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:33.587715   17401 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-421083"
	I1009 18:48:33.589013   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:33.590036   17401 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:48:33.591999   17401 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:48:33.592724   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:48:33.593794   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:48:33.593817   17401 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:48:33.640391   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:48:33.640418   17401 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:48:33.643274   17401 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:48:33.643292   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.712677   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:48:33.712708   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:48:33.798343   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:48:34.037807   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.044499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.098941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.313456   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.994855339s)
	I1009 18:48:34.313519   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:34.313540   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:34.313812   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:34.313849   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:34.313868   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:34.313881   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:34.313892   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:34.314177   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:34.314188   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:34.546371   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.546782   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.601912   17401 pod_ready.go:98] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.156 HostIPs:[{IP:192.168.39.156}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-09 18:48:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-09 18:48:28 +0000 UTC,FinishedAt:2024-10-09 18:48:33 +0000 UTC,ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868 Started:0xc001b13080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016828a0} {Name:kube-api-access-2lggz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016828b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1009 18:48:34.601936   17401 pod_ready.go:82] duration metric: took 5.508761994s for pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace to be "Ready" ...
	E1009 18:48:34.601946   17401 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.156 HostIPs:[{IP:192.168.39.156}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-09 18:48:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-09 18:48:28 +0000 UTC,FinishedAt:2024-10-09 18:48:33 +0000 UTC,ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868 Started:0xc001b13080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016828a0} {Name:kube-api-access-2lggz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016828b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1009 18:48:34.601956   17401 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.608414   17401 pod_ready.go:93] pod "etcd-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.608444   17401 pod_ready.go:82] duration metric: took 6.476297ms for pod "etcd-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.608458   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.618045   17401 pod_ready.go:93] pod "kube-apiserver-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.618073   17401 pod_ready.go:82] duration metric: took 9.606049ms for pod "kube-apiserver-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.618085   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.624712   17401 pod_ready.go:93] pod "kube-controller-manager-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.624739   17401 pod_ready.go:82] duration metric: took 6.645765ms for pod "kube-controller-manager-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.624750   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98lbc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.630896   17401 pod_ready.go:93] pod "kube-proxy-98lbc" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.630920   17401 pod_ready.go:82] duration metric: took 6.162418ms for pod "kube-proxy-98lbc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.630932   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.646945   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.014805   17401 pod_ready.go:93] pod "kube-scheduler-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.014827   17401 pod_ready.go:82] duration metric: took 383.888267ms for pod "kube-scheduler-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.014836   17401 pod_ready.go:39] duration metric: took 10.514987687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:35.014851   17401 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:48:35.014896   17401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:48:35.049792   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.070107   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.131734   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.214085   17401 api_server.go:72] duration metric: took 11.229784943s to wait for apiserver process to appear ...
	I1009 18:48:35.214114   17401 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:48:35.214138   17401 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I1009 18:48:35.216474   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.418096274s)
	I1009 18:48:35.216510   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:35.216522   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:35.216824   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:35.216867   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:35.216878   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:35.216890   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:35.216898   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:35.217135   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:35.217148   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:35.217150   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:35.218148   17401 addons.go:475] Verifying addon gcp-auth=true in "addons-421083"
	I1009 18:48:35.219896   17401 out.go:177] * Verifying gcp-auth addon...
	I1009 18:48:35.222417   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:48:35.237229   17401 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I1009 18:48:35.243892   17401 api_server.go:141] control plane version: v1.31.1
	I1009 18:48:35.243917   17401 api_server.go:131] duration metric: took 29.797275ms to wait for apiserver health ...
	I1009 18:48:35.243926   17401 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:48:35.264219   17401 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:48:35.264245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.286370   17401 system_pods.go:59] 18 kube-system pods found
	I1009 18:48:35.286409   17401 system_pods.go:61] "coredns-7c65d6cfc9-7nvgj" [b3ca0959-36fb-4d13-89c0-435f4fde16f8] Running
	I1009 18:48:35.286420   17401 system_pods.go:61] "coredns-7c65d6cfc9-fvwmm" [bad6872d-f55e-4622-b3ac-fb96784b9b65] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1009 18:48:35.286430   17401 system_pods.go:61] "csi-hostpath-attacher-0" [e2b2f817-c253-49b6-8345-271857327ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:48:35.286438   17401 system_pods.go:61] "csi-hostpath-resizer-0" [ddf25048-aab5-4cbc-bfec-8219363e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:48:35.286449   17401 system_pods.go:61] "csi-hostpathplugin-m7lz5" [c05bb7d7-3592-48d1-85d1-b361a68e79aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:48:35.286455   17401 system_pods.go:61] "etcd-addons-421083" [d3c4522c-c7fb-4ad2-8000-383016f601e5] Running
	I1009 18:48:35.286460   17401 system_pods.go:61] "kube-apiserver-addons-421083" [6082264c-0805-4790-8796-9ce439e9b3b4] Running
	I1009 18:48:35.286466   17401 system_pods.go:61] "kube-controller-manager-addons-421083" [45ee9bad-9652-46b5-b70d-12cfd491365b] Running
	I1009 18:48:35.286479   17401 system_pods.go:61] "kube-ingress-dns-minikube" [1f1cb904-3c3e-4c50-b15f-385022869b8e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:48:35.286493   17401 system_pods.go:61] "kube-proxy-98lbc" [6a26ad94-5c33-40db-8a42-9e11d3523806] Running
	I1009 18:48:35.286502   17401 system_pods.go:61] "kube-scheduler-addons-421083" [81120780-6ded-4417-9df7-67be5fef6826] Running
	I1009 18:48:35.286510   17401 system_pods.go:61] "metrics-server-84c5f94fbc-4s5xq" [cd71806c-0308-466b-917f-085718fee448] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:48:35.286535   17401 system_pods.go:61] "nvidia-device-plugin-daemonset-4k6f6" [c45cd383-1866-4787-a24e-bac7c6eb0863] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:48:35.286547   17401 system_pods.go:61] "registry-66c9cd494c-f92jv" [98955600-7b10-44b3-ac78-eff396b2c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:48:35.286556   17401 system_pods.go:61] "registry-proxy-x986l" [f7e67133-eaf2-4276-8331-d8dd8cbf0c4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:48:35.286567   17401 system_pods.go:61] "snapshot-controller-56fcc65765-4dht5" [0953230e-3a9a-494e-97c6-faef913aa115] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.286577   17401 system_pods.go:61] "snapshot-controller-56fcc65765-lshkr" [735e0cc5-1a6f-41e8-adfa-beaaee6751d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.286582   17401 system_pods.go:61] "storage-provisioner" [c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba] Running
	I1009 18:48:35.286594   17401 system_pods.go:74] duration metric: took 42.661688ms to wait for pod list to return data ...
	I1009 18:48:35.286607   17401 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:48:35.403443   17401 default_sa.go:45] found service account: "default"
	I1009 18:48:35.403474   17401 default_sa.go:55] duration metric: took 116.856615ms for default service account to be created ...
	I1009 18:48:35.403496   17401 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:48:35.538414   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.541140   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.642478   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.645183   17401 system_pods.go:86] 17 kube-system pods found
	I1009 18:48:35.645214   17401 system_pods.go:89] "coredns-7c65d6cfc9-7nvgj" [b3ca0959-36fb-4d13-89c0-435f4fde16f8] Running
	I1009 18:48:35.645226   17401 system_pods.go:89] "csi-hostpath-attacher-0" [e2b2f817-c253-49b6-8345-271857327ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:48:35.645237   17401 system_pods.go:89] "csi-hostpath-resizer-0" [ddf25048-aab5-4cbc-bfec-8219363e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:48:35.645252   17401 system_pods.go:89] "csi-hostpathplugin-m7lz5" [c05bb7d7-3592-48d1-85d1-b361a68e79aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:48:35.645258   17401 system_pods.go:89] "etcd-addons-421083" [d3c4522c-c7fb-4ad2-8000-383016f601e5] Running
	I1009 18:48:35.645265   17401 system_pods.go:89] "kube-apiserver-addons-421083" [6082264c-0805-4790-8796-9ce439e9b3b4] Running
	I1009 18:48:35.645272   17401 system_pods.go:89] "kube-controller-manager-addons-421083" [45ee9bad-9652-46b5-b70d-12cfd491365b] Running
	I1009 18:48:35.645280   17401 system_pods.go:89] "kube-ingress-dns-minikube" [1f1cb904-3c3e-4c50-b15f-385022869b8e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:48:35.645289   17401 system_pods.go:89] "kube-proxy-98lbc" [6a26ad94-5c33-40db-8a42-9e11d3523806] Running
	I1009 18:48:35.645296   17401 system_pods.go:89] "kube-scheduler-addons-421083" [81120780-6ded-4417-9df7-67be5fef6826] Running
	I1009 18:48:35.645307   17401 system_pods.go:89] "metrics-server-84c5f94fbc-4s5xq" [cd71806c-0308-466b-917f-085718fee448] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:48:35.645316   17401 system_pods.go:89] "nvidia-device-plugin-daemonset-4k6f6" [c45cd383-1866-4787-a24e-bac7c6eb0863] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:48:35.645327   17401 system_pods.go:89] "registry-66c9cd494c-f92jv" [98955600-7b10-44b3-ac78-eff396b2c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:48:35.645335   17401 system_pods.go:89] "registry-proxy-x986l" [f7e67133-eaf2-4276-8331-d8dd8cbf0c4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:48:35.645345   17401 system_pods.go:89] "snapshot-controller-56fcc65765-4dht5" [0953230e-3a9a-494e-97c6-faef913aa115] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.645357   17401 system_pods.go:89] "snapshot-controller-56fcc65765-lshkr" [735e0cc5-1a6f-41e8-adfa-beaaee6751d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.645362   17401 system_pods.go:89] "storage-provisioner" [c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba] Running
	I1009 18:48:35.645377   17401 system_pods.go:126] duration metric: took 241.871798ms to wait for k8s-apps to be running ...
	I1009 18:48:35.645389   17401 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:48:35.645446   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:35.688742   17401 system_svc.go:56] duration metric: took 43.344542ms WaitForService to wait for kubelet
	I1009 18:48:35.688773   17401 kubeadm.go:582] duration metric: took 11.704478846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:48:35.688790   17401 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:48:35.725862   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.797044   17401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 18:48:35.797067   17401 node_conditions.go:123] node cpu capacity is 2
	I1009 18:48:35.797078   17401 node_conditions.go:105] duration metric: took 108.283571ms to run NodePressure ...
	I1009 18:48:35.797088   17401 start.go:241] waiting for startup goroutines ...
	I1009 18:48:36.038811   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.042255   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.140907   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.225891   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.538883   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.542927   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.598373   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.729417   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.038253   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.041966   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.098230   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.226290   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.538487   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.541789   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.598048   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.726481   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.038469   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.041485   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.097529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.225898   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.538763   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.541697   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.598097   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.726422   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.337135   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.337500   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.338067   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.339560   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.537928   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.542262   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.598564   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.726527   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.038353   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.041308   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.097650   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.225337   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.539195   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.542143   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.597921   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.725924   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.037667   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.041864   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.097346   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.225322   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.538959   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.542081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.597874   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.726272   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.038753   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.041623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.097323   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.225272   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.538484   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.541056   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.640421   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.725859   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.037803   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.041140   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.098983   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.225799   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.541529   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.543436   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.598156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.725688   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.038742   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.041150   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.097549   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.226769   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.538172   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.540777   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.598049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.726623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.038762   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.041603   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.097214   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.225398   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.539142   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.541703   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.597852   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.725572   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.039192   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.040922   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.097445   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.225818   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.537754   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.541770   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.597319   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.725759   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.039362   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.041249   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.096841   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.226206   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.538337   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.541511   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.597941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.726483   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.038503   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.041274   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.097338   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.225594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.538508   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.540981   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.597331   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.725468   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.038710   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.042163   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.097924   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.226245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.538988   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.541412   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.596888   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.726081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.038243   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.041195   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.096807   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.226394   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.539781   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.541529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.597529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.725593   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.038645   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.041226   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.097467   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.226045   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.538939   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.541528   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.597520   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.726761   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.038739   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.041293   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.097555   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.226733   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.537867   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.540618   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.596960   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.726613   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.042273   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.042936   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.106610   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.230069   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.538016   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.541325   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.597065   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.725839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.039302   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.041204   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.097819   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.226623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.539225   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.541388   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.597116   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.727827   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.037790   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.041871   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.097658   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.225684   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.538858   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.540591   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.597177   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.725299   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.038274   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.040941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.097908   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.226257   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.538658   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.541533   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.597187   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.726312   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.037608   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.042062   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.097786   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.225923   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.537840   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.540791   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.597030   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.726663   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.038445   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.041005   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.097603   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.225893   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.672685   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.673025   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.673515   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.726061   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.038257   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.044075   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.097970   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.226072   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.537939   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.541434   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.596839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.727174   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.037844   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.041204   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.097883   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.227188   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.539325   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.540601   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.645122   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.726394   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.038807   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.042322   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.096810   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.226195   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.538222   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.541108   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.603587   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.726736   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.038468   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.041992   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.098096   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.226277   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.538050   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.541156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.597276   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.725792   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.038320   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.041308   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.097777   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.226962   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.538316   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.541384   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.597628   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.726083   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.038339   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.041045   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.097917   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.226725   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.538509   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.541594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.597247   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.725836   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.037583   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.041567   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.098678   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.225424   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.538384   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.541817   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.597839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.726567   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.038319   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.041125   17401 kapi.go:107] duration metric: took 34.00301838s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:49:06.097245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.226659   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.537835   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.598142   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.728451   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.037908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.097779   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.226748   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.760617   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.761555   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.762135   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.037684   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.097070   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.226609   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.537872   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.597484   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.726246   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.038347   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.097296   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.226643   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.538947   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.597750   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.726035   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.038995   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.098097   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.226857   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.538676   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.597792   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.728771   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.043083   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.096706   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.225783   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.537908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.597348   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.726594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.039426   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.141349   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.228146   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.538188   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.598030   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.731081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.037781   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.097507   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.226661   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.538384   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.596423   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.731049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.038432   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.097818   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.225544   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.539292   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.598499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.726155   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.039595   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.097356   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.226379   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.538357   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.597197   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.726266   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.038027   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.097695   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.226390   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.538688   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.597448   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.726849   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.038927   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.097951   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.225970   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.537619   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.597505   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.727059   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.037632   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.096974   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.226160   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.537798   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.596961   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.938328   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.161578   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.161871   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.225677   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.537344   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.596724   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.726000   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.037681   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.096937   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.226861   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.539847   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.598454   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.726816   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.037955   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.139202   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.226423   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.538122   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.597670   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.726032   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.038466   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.097214   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.226115   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.538162   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.598528   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.726622   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.038990   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.108179   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.232788   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.537921   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.597656   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.725545   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.038901   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.098156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.227084   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.541227   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.597895   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.726196   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.038507   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.104071   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:25.226381   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.538966   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.596676   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:25.726064   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.037709   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.097304   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:26.226609   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.538618   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.596921   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:26.726894   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.039216   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.097596   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:27.226548   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.618277   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:27.620145   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.726374   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.039951   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.140902   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:28.240174   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.538840   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.598884   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:28.729716   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.039453   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.097689   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:29.225624   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.538773   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.596838   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:29.726082   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.044193   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.098448   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:30.226246   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.547999   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.650630   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:30.725209   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.046620   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.097600   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:31.226499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.539195   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.604095   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:31.732460   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.038256   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.097825   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:32.226454   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.538908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.597558   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:32.726426   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.038241   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.139546   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:33.225816   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.538377   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.598428   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:33.727003   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.043262   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:34.145522   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:34.242625   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.538961   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:34.599129   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:34.726935   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.037743   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:35.097363   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:35.225550   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.539226   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:35.597486   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:35.726019   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.037830   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:36.097303   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:36.230689   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.538495   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:36.598440   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:36.728285   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.040053   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:37.140046   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:37.227270   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.538243   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:37.597895   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:37.726362   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.038857   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:38.097551   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:38.227580   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.539134   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:38.640058   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:38.726213   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.038974   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:39.099422   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:39.227643   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.538762   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:39.597799   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:39.726722   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.039788   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:40.099827   17401 kapi.go:107] duration metric: took 1m6.507096683s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:49:40.227951   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.538483   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:40.725624   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.039719   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:41.227109   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.540186   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:41.726512   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.038597   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:42.227385   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.538617   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:42.725595   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.188196   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:43.229131   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.538175   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:43.725721   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.038313   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:44.226209   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.539141   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:44.727854   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.038316   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:45.227568   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.538315   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:45.725967   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.038053   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:46.226688   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.539059   17401 kapi.go:107] duration metric: took 1m14.505379696s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:49:46.726049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.226280   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.726031   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.230838   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.726211   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.226106   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.726519   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.226412   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.725990   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:51.226985   17401 kapi.go:107] duration metric: took 1m16.004566477s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:49:51.228652   17401 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-421083 cluster.
	I1009 18:49:51.229917   17401 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:49:51.231288   17401 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:49:51.232694   17401 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:49:51.233922   17401 addons.go:510] duration metric: took 1m27.249612449s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server nvidia-device-plugin inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:49:51.233955   17401 start.go:246] waiting for cluster config update ...
	I1009 18:49:51.233974   17401 start.go:255] writing updated cluster config ...
	I1009 18:49:51.234198   17401 ssh_runner.go:195] Run: rm -f paused
	I1009 18:49:51.287148   17401 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:49:51.288903   17401 out.go:177] * Done! kubectl is now configured to use "addons-421083" cluster and "default" namespace by default
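
	As the gcp-auth messages above note, the addon mounts GCP credentials into every new pod, and a pod can opt out by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a manifest is below; the pod name, image, and label value are illustrative assumptions (the addon message only specifies the label key), not taken from this report:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                 # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"     # key from the gcp-auth message above; the value here is an assumption
	    spec:
	      containers:
	      - name: busybox
	        image: busybox                   # placeholder image for illustration
	        command: ["sleep", "3600"]

	Per the same messages, pods created before the addon finished enabling only pick up the credential mount after being recreated, or after rerunning the enable step with the refresh flag, e.g. `minikube addons enable gcp-auth --refresh`.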
	
	
	==> CRI-O <==
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.900474730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500470900449858,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574194,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdd90787-5377-4f2b-a58f-ff036f342eca name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.901006418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a5fbb57-612f-4f7e-a93a-25dde9d3cf72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.901116797Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a5fbb57-612f-4f7e-a93a-25dde9d3cf72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.901436973Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b606f98cb5df56eb855ada87a088e46c15a93495f6ea7dc731ba6c6a77101db0,PodSandboxId:aa7b9e74a00b50c83fd20b3c5c57446ab69d97de5bcad830fa7220c300f67041,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728499785887195601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-2g72b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27748278-bcf8-483d-9f4e-928de56f1737,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:990a1f4cf4dbd8bc6595a5fec421f84d7d6cf4caa141944bacdfcf52a70ca602,PodSandboxId:16c06aedb895a6173db4068faf6516b519339f6a1a9fcc5f02ac5d45494dd731,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728499765127539678,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-25wtz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f6b1bd8-b6bb-4e40-b069-1d58797e7522,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8a9103c44c657fa327b5dc00387dd6fc1d9d7f02bc429e51c838abfdc7aca49,PodSandboxId:151a5c04d8eb5679225a06bc58e6e63480caa2b21ed133130bd11128a2f8701a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728499764418653441,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4krpd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52536c40-3148-4827-9989-4a0b8eb2dd5a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db9ba58d897e41a65b784c784284206b8e22915749301ce56100225c2255953,PodSandboxId:927ce4283ae091dc8d9fbbbad12d782d5c56509a23d8e1778ec07ebb62677b0e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728499730778128098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1cb904-3c3e-4c50-b15f-385022869b8e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744b
e9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f
56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365
fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d
304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a5fbb57-612f-4f7e-a93a
-25dde9d3cf72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.944558798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0421500-1d4e-4a1e-9a14-e34ff8bf3213 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.944653490Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0421500-1d4e-4a1e-9a14-e34ff8bf3213 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.945825007Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a6ec858e-554c-497c-a226-691fdef8aa7b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.946987659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500470946961148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574194,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a6ec858e-554c-497c-a226-691fdef8aa7b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.947543858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf7834b7-8d5f-45ad-9201-53445522f940 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.947624869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf7834b7-8d5f-45ad-9201-53445522f940 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.948104474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b606f98cb5df56eb855ada87a088e46c15a93495f6ea7dc731ba6c6a77101db0,PodSandboxId:aa7b9e74a00b50c83fd20b3c5c57446ab69d97de5bcad830fa7220c300f67041,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728499785887195601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-2g72b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27748278-bcf8-483d-9f4e-928de56f1737,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:990a1f4cf4dbd8bc6595a5fec421f84d7d6cf4caa141944bacdfcf52a70ca602,PodSandboxId:16c06aedb895a6173db4068faf6516b519339f6a1a9fcc5f02ac5d45494dd731,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728499765127539678,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-25wtz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f6b1bd8-b6bb-4e40-b069-1d58797e7522,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8a9103c44c657fa327b5dc00387dd6fc1d9d7f02bc429e51c838abfdc7aca49,PodSandboxId:151a5c04d8eb5679225a06bc58e6e63480caa2b21ed133130bd11128a2f8701a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728499764418653441,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4krpd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52536c40-3148-4827-9989-4a0b8eb2dd5a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db9ba58d897e41a65b784c784284206b8e22915749301ce56100225c2255953,PodSandboxId:927ce4283ae091dc8d9fbbbad12d782d5c56509a23d8e1778ec07ebb62677b0e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728499730778128098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1cb904-3c3e-4c50-b15f-385022869b8e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744b
e9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f
56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365
fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d
304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cf7834b7-8d5f-45ad-9201
-53445522f940 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.988241032Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e00eaab2-a7ef-4a83-89d2-0a639f58df7b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.988344171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e00eaab2-a7ef-4a83-89d2-0a639f58df7b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.989810978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c525616-b39e-44ad-99da-c97d3d30d90a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.991108339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500470991079357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574194,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c525616-b39e-44ad-99da-c97d3d30d90a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.991778067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fdc28df5-739a-41ec-bf60-f3ff6bdcbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.991981349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fdc28df5-739a-41ec-bf60-f3ff6bdcbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:10 addons-421083 crio[659]: time="2024-10-09 19:01:10.992315496Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b606f98cb5df56eb855ada87a088e46c15a93495f6ea7dc731ba6c6a77101db0,PodSandboxId:aa7b9e74a00b50c83fd20b3c5c57446ab69d97de5bcad830fa7220c300f67041,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728499785887195601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-2g72b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27748278-bcf8-483d-9f4e-928de56f1737,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:990a1f4cf4dbd8bc6595a5fec421f84d7d6cf4caa141944bacdfcf52a70ca602,PodSandboxId:16c06aedb895a6173db4068faf6516b519339f6a1a9fcc5f02ac5d45494dd731,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728499765127539678,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-25wtz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f6b1bd8-b6bb-4e40-b069-1d58797e7522,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8a9103c44c657fa327b5dc00387dd6fc1d9d7f02bc429e51c838abfdc7aca49,PodSandboxId:151a5c04d8eb5679225a06bc58e6e63480caa2b21ed133130bd11128a2f8701a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728499764418653441,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4krpd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52536c40-3148-4827-9989-4a0b8eb2dd5a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db9ba58d897e41a65b784c784284206b8e22915749301ce56100225c2255953,PodSandboxId:927ce4283ae091dc8d9fbbbad12d782d5c56509a23d8e1778ec07ebb62677b0e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728499730778128098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1cb904-3c3e-4c50-b15f-385022869b8e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744b
e9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f
56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365
fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d
304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fdc28df5-739a-41ec-bf60
-f3ff6bdcbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.028142583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52eb7e92-fa21-415c-b54e-809ca1e678ed name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.028235462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52eb7e92-fa21-415c-b54e-809ca1e678ed name=/runtime.v1.RuntimeService/Version
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.029743392Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=148631e5-8a09-40d5-b693-bf7ba66372e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.031006357Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500471030979430,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574194,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=148631e5-8a09-40d5-b693-bf7ba66372e0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.031623243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb8fb9b4-3229-4656-bd19-b24094b28d94 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.031702149Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb8fb9b4-3229-4656-bd19-b24094b28d94 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:01:11 addons-421083 crio[659]: time="2024-10-09 19:01:11.032135720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b606f98cb5df56eb855ada87a088e46c15a93495f6ea7dc731ba6c6a77101db0,PodSandboxId:aa7b9e74a00b50c83fd20b3c5c57446ab69d97de5bcad830fa7220c300f67041,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1728499785887195601,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-2g72b,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 27748278-bcf8-483d-9f4e-928de56f1737,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:990a1f4cf4dbd8bc6595a5fec421f84d7d6cf4caa141944bacdfcf52a70ca602,PodSandboxId:16c06aedb895a6173db4068faf6516b519339f6a1a9fcc5f02ac5d45494dd731,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Stat
e:CONTAINER_EXITED,CreatedAt:1728499765127539678,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-25wtz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0f6b1bd8-b6bb-4e40-b069-1d58797e7522,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8a9103c44c657fa327b5dc00387dd6fc1d9d7f02bc429e51c838abfdc7aca49,PodSandboxId:151a5c04d8eb5679225a06bc58e6e63480caa2b21ed133130bd11128a2f8701a,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52
b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1728499764418653441,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-4krpd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52536c40-3148-4827-9989-4a0b8eb2dd5a,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},I
mageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0db9ba58d897e41a65b784c784284206b8e22915749301ce56100225c2255953,PodSandboxId:927ce4283ae091dc8d9fbbbad12d782d5c56509a23d8e1778ec07ebb62677b0e,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@s
ha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1728499730778128098,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f1cb904-3c3e-4c50-b15f-385022869b8e,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744b
e9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metada
ta:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod
.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&
Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f
56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365
fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d
304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb8fb9b4-3229-4656-bd19
-b24094b28d94 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4ebcd866002d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          37 seconds ago      Running             busybox                   0                   f3e61dc7e25e6       busybox
	7e10b0fae1769       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   5559e240f9840       nginx
	b606f98cb5df5       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             11 minutes ago      Running             controller                0                   aa7b9e74a00b5       ingress-nginx-controller-bc57996ff-2g72b
	990a1f4cf4dbd       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     1                   16c06aedb895a       ingress-nginx-admission-patch-25wtz
	f8a9103c44c65       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   11 minutes ago      Exited              create                    0                   151a5c04d8eb5       ingress-nginx-admission-create-4krpd
	254a4567afd2d       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        11 minutes ago      Running             metrics-server            0                   1e69577e6e197       metrics-server-84c5f94fbc-4s5xq
	0db9ba58d897e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago      Running             minikube-ingress-dns      0                   927ce4283ae09       kube-ingress-dns-minikube
	9dc0d87ec4e28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   0911a44088ff1       storage-provisioner
	24e77cea269e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   06bcb6b7d13a0       coredns-7c65d6cfc9-7nvgj
	5eb98519fb296       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   0da18081b4724       kube-proxy-98lbc
	f6902ff7c3198       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager   0                   782b171e5be94       kube-controller-manager-addons-421083
	5752f9d7d67df       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                      0                   aca54aae7f3a7       etcd-addons-421083
	b631e95bf64ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler            0                   069f31703e472       kube-scheduler-addons-421083
	2e3aab9ef167b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver            0                   d3bfa5517f5dd       kube-apiserver-addons-421083
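	The container status table above is the human-readable form of the /runtime.v1.RuntimeService/ListContainers responses in the crio debug log. For context, the same data can be pulled straight from the CRI socket; the following is a minimal, hypothetical Go sketch, not part of the minikube test suite. It assumes the k8s.io/cri-api v1 client and the crio socket path from the kubeadm.alpha.kubernetes.io/cri-socket annotation shown in the node description below.

	// listcontainers.go: hypothetical standalone helper that issues the same
	// ListContainers RPC seen in the crio debug log above.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Socket path taken from the node's cri-socket annotation (assumption).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI socket: %v", err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty request, i.e. no filter: the "returning full container list" path in the log.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%-13.13s  %-25s  %-18s  %s\n",
				c.Id, c.Metadata.Name, c.State, c.Labels["io.kubernetes.pod.name"])
		}
	}

	Run on the node itself (for example via minikube ssh), this would print roughly the same rows as the table above: truncated container ID, container name, state, and owning pod.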
	
	
	==> coredns [24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3] <==
	[INFO] 10.244.0.7:52157 - 18181 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 97 false 1232" NXDOMAIN qr,aa,rd 179 0.000423075s
	[INFO] 10.244.0.7:52157 - 61904 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160278s
	[INFO] 10.244.0.7:52157 - 19655 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000084884s
	[INFO] 10.244.0.7:52157 - 30540 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104806s
	[INFO] 10.244.0.7:52157 - 5390 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076339s
	[INFO] 10.244.0.7:52157 - 29269 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000164334s
	[INFO] 10.244.0.7:52157 - 48153 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000087391s
	[INFO] 10.244.0.7:39719 - 42856 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00008114s
	[INFO] 10.244.0.7:39719 - 42586 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000047538s
	[INFO] 10.244.0.7:37862 - 61398 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000132918s
	[INFO] 10.244.0.7:37862 - 61601 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000402281s
	[INFO] 10.244.0.7:44503 - 32156 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000204209s
	[INFO] 10.244.0.7:44503 - 31881 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000137979s
	[INFO] 10.244.0.7:55985 - 53020 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057294s
	[INFO] 10.244.0.7:55985 - 52827 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107881s
	[INFO] 10.244.0.21:45643 - 29018 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000409948s
	[INFO] 10.244.0.21:41087 - 7356 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190682s
	[INFO] 10.244.0.21:55366 - 9280 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000498987s
	[INFO] 10.244.0.21:60162 - 3476 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009657s
	[INFO] 10.244.0.21:48188 - 43535 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109583s
	[INFO] 10.244.0.21:49778 - 44443 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000291938s
	[INFO] 10.244.0.21:35954 - 23251 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001047628s
	[INFO] 10.244.0.21:42535 - 23659 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001257326s
	[INFO] 10.244.0.26:43903 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000598771s
	[INFO] 10.244.0.26:32868 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000215717s
	
	
	==> describe nodes <==
	Name:               addons-421083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-421083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-421083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_48_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-421083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:48:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-421083
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:00:52 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:00:52 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:00:52 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:00:52 +0000   Wed, 09 Oct 2024 18:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    addons-421083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 75e2a6cd148147469f518c75962f3bbf
	  System UUID:                75e2a6cd-1481-4746-9f51-8c75962f3bbf
	  Boot ID:                    0e0c5f47-c02c-48b8-acd3-0a67c93483b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-hcpz4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-2g72b    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-7nvgj                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-421083                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-421083                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-421083       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-98lbc                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-421083                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-4s5xq             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-421083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-421083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-421083 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)  kubelet          Node addons-421083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)  kubelet          Node addons-421083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)  kubelet          Node addons-421083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                12m                kubelet          Node addons-421083 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node addons-421083 event: Registered Node addons-421083 in Controller
	
	
	==> dmesg <==
	[  +0.072513] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.846157] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.117755] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.009528] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.032849] kauditd_printk_skb: 157 callbacks suppressed
	[  +8.184113] kauditd_printk_skb: 36 callbacks suppressed
	[ +17.999127] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 9 18:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.669872] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.322968] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.434906] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.178339] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.167095] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.203594] kauditd_printk_skb: 4 callbacks suppressed
	[Oct 9 18:50] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 18:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.139384] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.651026] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.156133] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.838672] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.795204] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.658853] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.285162] kauditd_printk_skb: 27 callbacks suppressed
	[Oct 9 18:59] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 9 19:00] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0] <==
	{"level":"info","ts":"2024-10-09T18:49:18.924834Z","caller":"traceutil/trace.go:171","msg":"trace[903819024] linearizableReadLoop","detail":"{readStateIndex:996; appliedIndex:995; }","duration":"209.855963ms","start":"2024-10-09T18:49:18.714964Z","end":"2024-10-09T18:49:18.924820Z","steps":["trace[903819024] 'read index received'  (duration: 209.70917ms)","trace[903819024] 'applied index is now lower than readState.Index'  (duration: 146.358µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:49:18.924957Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.96148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:18.924976Z","caller":"traceutil/trace.go:171","msg":"trace[68883559] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"210.010383ms","start":"2024-10-09T18:49:18.714960Z","end":"2024-10-09T18:49:18.924971Z","steps":["trace[68883559] 'agreement among raft nodes before linearized reading'  (duration: 209.928277ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:49:18.925130Z","caller":"traceutil/trace.go:171","msg":"trace[1822636989] transaction","detail":"{read_only:false; response_revision:968; number_of_response:1; }","duration":"222.835999ms","start":"2024-10-09T18:49:18.702277Z","end":"2024-10-09T18:49:18.925113Z","steps":["trace[1822636989] 'process raft request'  (duration: 222.441235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:19.145947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.535302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:19.146012Z","caller":"traceutil/trace.go:171","msg":"trace[1382877953] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"120.619504ms","start":"2024-10-09T18:49:19.025382Z","end":"2024-10-09T18:49:19.146002Z","steps":["trace[1382877953] 'range keys from in-memory index tree'  (duration: 120.434877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:19.146170Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.775854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:19.146201Z","caller":"traceutil/trace.go:171","msg":"trace[1721120419] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:968; }","duration":"205.815551ms","start":"2024-10-09T18:49:18.940379Z","end":"2024-10-09T18:49:19.146194Z","steps":["trace[1721120419] 'range keys from in-memory index tree'  (duration: 205.725803ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:49:27.595961Z","caller":"traceutil/trace.go:171","msg":"trace[1766742759] transaction","detail":"{read_only:false; response_revision:1019; number_of_response:1; }","duration":"150.280237ms","start":"2024-10-09T18:49:27.445646Z","end":"2024-10-09T18:49:27.595926Z","steps":["trace[1766742759] 'process raft request'  (duration: 145.972212ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:49:43.170696Z","caller":"traceutil/trace.go:171","msg":"trace[1012372526] linearizableReadLoop","detail":"{readStateIndex:1148; appliedIndex:1147; }","duration":"256.466952ms","start":"2024-10-09T18:49:42.914206Z","end":"2024-10-09T18:49:43.170673Z","steps":["trace[1012372526] 'read index received'  (duration: 252.704542ms)","trace[1012372526] 'applied index is now lower than readState.Index'  (duration: 3.761549ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:49:43.170894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.657701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-09T18:49:43.170932Z","caller":"traceutil/trace.go:171","msg":"trace[122982882] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1114; }","duration":"256.722135ms","start":"2024-10-09T18:49:42.914202Z","end":"2024-10-09T18:49:43.170925Z","steps":["trace[122982882] 'agreement among raft nodes before linearized reading'  (duration: 256.597377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:43.171205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.810461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:43.171247Z","caller":"traceutil/trace.go:171","msg":"trace[989403056] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"145.862938ms","start":"2024-10-09T18:49:43.025377Z","end":"2024-10-09T18:49:43.171240Z","steps":["trace[989403056] 'agreement among raft nodes before linearized reading'  (duration: 145.791706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:58:12.978697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.231892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:58:12.978906Z","caller":"traceutil/trace.go:171","msg":"trace[237117682] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2027; }","duration":"112.555179ms","start":"2024-10-09T18:58:12.866342Z","end":"2024-10-09T18:58:12.978897Z","steps":["trace[237117682] 'agreement among raft nodes before linearized reading'  (duration: 112.20562ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:58:12.978382Z","caller":"traceutil/trace.go:171","msg":"trace[1973752009] linearizableReadLoop","detail":"{readStateIndex:2172; appliedIndex:2171; }","duration":"112.012437ms","start":"2024-10-09T18:58:12.866345Z","end":"2024-10-09T18:58:12.978358Z","steps":["trace[1973752009] 'read index received'  (duration: 107.540151ms)","trace[1973752009] 'applied index is now lower than readState.Index'  (duration: 4.471457ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:58:14.526736Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-10-09T18:58:14.615134Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1502,"took":"87.786338ms","hash":1603094992,"current-db-size-bytes":6234112,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3518464,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-10-09T18:58:14.615185Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1603094992,"revision":1502,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T18:58:41.047459Z","caller":"traceutil/trace.go:171","msg":"trace[1415707454] linearizableReadLoop","detail":"{readStateIndex:2391; appliedIndex:2390; }","duration":"105.614078ms","start":"2024-10-09T18:58:40.941816Z","end":"2024-10-09T18:58:41.047431Z","steps":["trace[1415707454] 'read index received'  (duration: 105.391166ms)","trace[1415707454] 'applied index is now lower than readState.Index'  (duration: 222.452µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:58:41.047619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.773802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:58:41.047641Z","caller":"traceutil/trace.go:171","msg":"trace[918423557] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2232; }","duration":"105.823834ms","start":"2024-10-09T18:58:40.941811Z","end":"2024-10-09T18:58:41.047635Z","steps":["trace[918423557] 'agreement among raft nodes before linearized reading'  (duration: 105.758777ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:58:41.047737Z","caller":"traceutil/trace.go:171","msg":"trace[1215509480] transaction","detail":"{read_only:false; response_revision:2232; number_of_response:1; }","duration":"367.962133ms","start":"2024-10-09T18:58:40.679756Z","end":"2024-10-09T18:58:41.047719Z","steps":["trace[1215509480] 'process raft request'  (duration: 367.562227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:58:41.047846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T18:58:40.679739Z","time spent":"368.031201ms","remote":"127.0.0.1:45202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2198 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	
	
	==> kernel <==
	 19:01:11 up 13 min,  0 users,  load average: 0.20, 0.39, 0.33
	Linux addons-421083 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416] <==
	 > logger="UnhandledError"
	E1009 18:50:20.122416       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.189.144:443: connect: connection refused" logger="UnhandledError"
	E1009 18:50:20.128396       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.189.144:443: connect: connection refused" logger="UnhandledError"
	I1009 18:50:20.187912       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1009 18:58:03.776001       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.91.111"}
	I1009 18:58:31.130124       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 18:58:32.186435       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1009 18:58:34.589694       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 18:58:48.521286       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:58:48.710844       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.244.253"}
	I1009 18:58:49.731782       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:59:21.986735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:21.992546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.012155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.012246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.021961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.022021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.126128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.126445       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.138603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.138679       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:59:23.128402       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:59:23.139670       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:59:23.140520       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1009 19:01:09.869251       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.105.90"}
	
	
	==> kube-controller-manager [f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139] <==
	W1009 18:59:43.223912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:43.224099       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:59:43.255365       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:43.255423       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:59:54.027528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:54.027690       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:03.297833       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:03.297915       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:04.212780       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:04.212976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:07.533589       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:07.533701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:41.117009       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:41.117353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:44.588504       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:44.588559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:46.595541       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:46.595615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:49.531307       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:49.531401       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:00:52.818843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-421083"
	I1009 19:01:09.723337       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="35.744108ms"
	I1009 19:01:09.749674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="26.148815ms"
	I1009 19:01:09.768330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="18.594392ms"
	I1009 19:01:09.768445       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.293µs"
	
	
	==> kube-proxy [5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 18:48:25.914156       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 18:48:25.933709       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.156"]
	E1009 18:48:25.933781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:48:26.021158       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 18:48:26.021221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:48:26.021249       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:48:26.028960       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:48:26.029265       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:48:26.029279       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:48:26.038551       1 config.go:199] "Starting service config controller"
	I1009 18:48:26.038576       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:48:26.038620       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:48:26.038625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:48:26.039111       1 config.go:328] "Starting node config controller"
	I1009 18:48:26.039119       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:48:26.139252       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:48:26.139279       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:48:26.139303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b] <==
	W1009 18:48:16.081195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:16.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:16.943253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:16.943373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.028396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:48:17.028532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.065115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 18:48:17.065154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.166188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:17.166243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.171672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:17.171714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.200481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 18:48:17.200531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.211812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:48:17.212112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.223949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:48:17.223987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.286257       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 18:48:17.286349       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 18:48:17.306236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 18:48:17.306367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.312318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 18:48:17.312424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1009 18:48:19.268151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:01:08 addons-421083 kubelet[1205]: E1009 19:01:08.957787    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500468957008049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:574194,},InodesUsed:&UInt64Value{Value:197,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.726722    1205 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=37.288451059 podStartE2EDuration="11m18.726689908s" podCreationTimestamp="2024-10-09 18:49:51 +0000 UTC" firstStartedPulling="2024-10-09 18:49:52.341230882 +0000 UTC m=+93.978242561" lastFinishedPulling="2024-10-09 19:00:33.779469727 +0000 UTC m=+735.416481410" observedRunningTime="2024-10-09 19:00:34.624407997 +0000 UTC m=+736.261419694" watchObservedRunningTime="2024-10-09 19:01:09.726689908 +0000 UTC m=+771.363701605"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727571    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="494eeecd-f063-4c23-bc72-bbb7e8a13218" containerName="task-pv-container"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727655    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0953230e-3a9a-494e-97c6-faef913aa115" containerName="volume-snapshot-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727703    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-provisioner"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727735    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e2b2f817-c253-49b6-8345-271857327ef0" containerName="csi-attacher"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727766    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-snapshotter"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727797    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="node-driver-registrar"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727828    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="hostpath"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727858    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="735e0cc5-1a6f-41e8-adfa-beaaee6751d3" containerName="volume-snapshot-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727891    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-external-health-monitor-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727922    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ddf25048-aab5-4cbc-bfec-8219363e5c69" containerName="csi-resizer"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: E1009 19:01:09.727952    1205 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="liveness-probe"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728024    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-external-health-monitor-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728112    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="node-driver-registrar"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728143    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="liveness-probe"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728173    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-provisioner"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728203    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e0cc5-1a6f-41e8-adfa-beaaee6751d3" containerName="volume-snapshot-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728233    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddf25048-aab5-4cbc-bfec-8219363e5c69" containerName="csi-resizer"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728265    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="494eeecd-f063-4c23-bc72-bbb7e8a13218" containerName="task-pv-container"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728294    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="0953230e-3a9a-494e-97c6-faef913aa115" containerName="volume-snapshot-controller"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728327    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="hostpath"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728355    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="c05bb7d7-3592-48d1-85d1-b361a68e79aa" containerName="csi-snapshotter"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.728385    1205 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2b2f817-c253-49b6-8345-271857327ef0" containerName="csi-attacher"
	Oct 09 19:01:09 addons-421083 kubelet[1205]: I1009 19:01:09.798353    1205 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqsh\" (UniqueName: \"kubernetes.io/projected/813d520c-a411-406a-8178-0933a95697c4-kube-api-access-crqsh\") pod \"hello-world-app-55bf9c44b4-hcpz4\" (UID: \"813d520c-a411-406a-8178-0933a95697c4\") " pod="default/hello-world-app-55bf9c44b4-hcpz4"
	
	
	==> storage-provisioner [9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2] <==
	I1009 18:48:31.827201       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:48:31.932781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:48:31.932854       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:48:32.203526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:48:32.208253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d!
	I1009 18:48:32.219726       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c7228b5-eb03-4914-bf6e-0a6716f3b445", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d became leader
	I1009 18:48:32.412488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-421083 -n addons-421083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-421083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-hcpz4 ingress-nginx-admission-create-4krpd ingress-nginx-admission-patch-25wtz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-421083 describe pod hello-world-app-55bf9c44b4-hcpz4 ingress-nginx-admission-create-4krpd ingress-nginx-admission-patch-25wtz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-421083 describe pod hello-world-app-55bf9c44b4-hcpz4 ingress-nginx-admission-create-4krpd ingress-nginx-admission-patch-25wtz: exit status 1 (71.269634ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-hcpz4
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-421083/192.168.39.156
	Start Time:       Wed, 09 Oct 2024 19:01:09 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-crqsh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-crqsh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-hcpz4 to addons-421083
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4krpd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-25wtz" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-421083 describe pod hello-world-app-55bf9c44b4-hcpz4 ingress-nginx-admission-create-4krpd ingress-nginx-admission-patch-25wtz: exit status 1
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable ingress-dns --alsologtostderr -v=1: (1.64119452s)
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable ingress --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable ingress --alsologtostderr -v=1: (7.719612209s)
--- FAIL: TestAddons/parallel/Ingress (153.32s)

x
+
TestAddons/parallel/MetricsServer (320.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.428374ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4s5xq" [cd71806c-0308-466b-917f-085718fee448] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004934488s
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (73.945055ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 9m56.564169834s

                                                
                                                
** /stderr **
I1009 18:58:19.565865   16607 retry.go:31] will retry after 2.80313961s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (65.823684ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 9m59.433974109s

                                                
                                                
** /stderr **
I1009 18:58:22.435802   16607 retry.go:31] will retry after 3.514864597s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (68.636993ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 10m3.018203362s

                                                
                                                
** /stderr **
I1009 18:58:26.020007   16607 retry.go:31] will retry after 3.406218137s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (63.422626ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 10m6.488838827s

                                                
                                                
** /stderr **
I1009 18:58:29.490692   16607 retry.go:31] will retry after 10.887505592s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (73.164652ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 10m17.449954494s

                                                
                                                
** /stderr **
I1009 18:58:40.451986   16607 retry.go:31] will retry after 9.267924191s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (68.563715ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 10m26.787653444s

                                                
                                                
** /stderr **
I1009 18:58:49.789403   16607 retry.go:31] will retry after 21.991969541s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (61.984188ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 10m48.842837762s

                                                
                                                
** /stderr **
I1009 18:59:11.844572   16607 retry.go:31] will retry after 30.391934386s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (62.981567ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 11m19.298622966s

                                                
                                                
** /stderr **
I1009 18:59:42.300534   16607 retry.go:31] will retry after 30.337240078s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (60.688487ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 11m49.697478609s

                                                
                                                
** /stderr **
I1009 19:00:12.699175   16607 retry.go:31] will retry after 1m14.323924849s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (62.710964ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 13m4.088195732s

                                                
                                                
** /stderr **
I1009 19:01:27.090279   16607 retry.go:31] will retry after 38.541025396s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (62.932215ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 13m42.695954643s

                                                
                                                
** /stderr **
I1009 19:02:05.697764   16607 retry.go:31] will retry after 1m26.714398701s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-421083 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-421083 top pods -n kube-system: exit status 1 (64.949155ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-7nvgj, age: 15m9.475548149s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
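
The long tail of "will retry after ..." lines above is the test polling "kubectl top pods -n kube-system" with a growing backoff until metrics-server starts serving data; in this run it never does, so the check gives up with exit status 1. A minimal standalone sketch of that poll-and-backoff pattern in Go (the helper name and backoff schedule here are illustrative, not minikube's actual retry.go):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForPodMetrics repeatedly runs "kubectl top pods" against the given
    // context and namespace until it succeeds or the deadline passes.
    // The doubling backoff is illustrative; the real test uses its own retry helper.
    func waitForPodMetrics(kubeContext, namespace string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        backoff := 3 * time.Second
        for {
            out, err := exec.Command("kubectl", "--context", kubeContext,
                "top", "pods", "-n", namespace).CombinedOutput()
            if err == nil {
                fmt.Printf("metrics available:\n%s", out)
                return nil
            }
            if time.Now().Add(backoff).After(deadline) {
                return fmt.Errorf("metrics never became available: %v\n%s", err, out)
            }
            fmt.Printf("will retry after %v: %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2 // no cap, kept short for the sketch
        }
    }

    func main() {
        if err := waitForPodMetrics("addons-421083", "kube-system", 5*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
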
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-421083 -n addons-421083
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 logs -n 25: (1.230995591s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-944932                                                                     | download-only-944932 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| delete  | -p download-only-988518                                                                     | download-only-988518 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-505183 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | binary-mirror-505183                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:43333                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-505183                                                                     | binary-mirror-505183 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | addons-421083                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | addons-421083                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-421083 --wait=true                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:49 UTC | 09 Oct 24 18:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:58 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | -p addons-421083                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-421083 ssh cat                                                                       | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | /opt/local-path-provisioner/pvc-e5d4b64b-252d-4269-93cd-d7941b14a023_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-421083 ip                                                                            | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-421083 ssh curl -s                                                                   | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-421083 addons                                                                        | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-421083 ip                                                                            | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-421083 addons disable                                                                | addons-421083        | jenkins | v1.34.0 | 09 Oct 24 19:01 UTC | 09 Oct 24 19:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
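	
	The audit entries above record the exact commands the suite ran against this profile, including the full "start -p addons-421083" invocation and its addon flags. As a rough illustration only (abridged flag list copied from the table above; not part of the test suite), the same invocation could be driven from Go:
	
	    package main
	
	    import (
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        // Flag list abridged from the "start -p addons-421083" audit entry above.
	        cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "addons-421083",
	            "--wait=true", "--memory=4000", "--alsologtostderr",
	            "--addons=registry", "--addons=metrics-server", "--addons=ingress",
	            "--driver=kvm2", "--container-runtime=crio")
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            panic(err)
	        }
	    }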
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:47:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:47:36.919131   17401 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:47:36.919268   17401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:36.919278   17401 out.go:358] Setting ErrFile to fd 2...
	I1009 18:47:36.919285   17401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:36.919470   17401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 18:47:36.920079   17401 out.go:352] Setting JSON to false
	I1009 18:47:36.920885   17401 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1798,"bootTime":1728497859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:47:36.920984   17401 start.go:139] virtualization: kvm guest
	I1009 18:47:36.922972   17401 out.go:177] * [addons-421083] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:47:36.924203   17401 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:47:36.924202   17401 notify.go:220] Checking for updates...
	I1009 18:47:36.925482   17401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:36.926648   17401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:47:36.927811   17401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:36.928991   17401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:47:36.930220   17401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:47:36.931405   17401 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:47:36.962269   17401 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 18:47:36.963305   17401 start.go:297] selected driver: kvm2
	I1009 18:47:36.963317   17401 start.go:901] validating driver "kvm2" against <nil>
	I1009 18:47:36.963327   17401 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:47:36.964029   17401 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:36.964104   17401 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:47:36.978384   17401 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 18:47:36.978421   17401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:47:36.978675   17401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:47:36.978709   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:47:36.978771   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:47:36.978779   17401 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:47:36.978836   17401 start.go:340] cluster config:
	{Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:36.978944   17401 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:36.980666   17401 out.go:177] * Starting "addons-421083" primary control-plane node in "addons-421083" cluster
	I1009 18:47:36.981893   17401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:36.981928   17401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:47:36.981938   17401 cache.go:56] Caching tarball of preloaded images
	I1009 18:47:36.982016   17401 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:47:36.982027   17401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:47:36.982319   17401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json ...
	I1009 18:47:36.982338   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json: {Name:mk8bd821ac2bab660fc018f0f8c608bab2497d17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:36.982466   17401 start.go:360] acquireMachinesLock for addons-421083: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 18:47:36.982509   17401 start.go:364] duration metric: took 31.338µs to acquireMachinesLock for "addons-421083"
	I1009 18:47:36.982525   17401 start.go:93] Provisioning new machine with config: &{Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:36.982580   17401 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 18:47:36.984137   17401 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1009 18:47:36.984283   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:47:36.984321   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:47:36.997940   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 18:47:36.998290   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:47:36.998850   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:47:36.998899   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:47:36.999274   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:47:36.999445   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:47:36.999563   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:47:36.999716   17401 start.go:159] libmachine.API.Create for "addons-421083" (driver="kvm2")
	I1009 18:47:36.999745   17401 client.go:168] LocalClient.Create starting
	I1009 18:47:36.999785   17401 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 18:47:37.331686   17401 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 18:47:37.435435   17401 main.go:141] libmachine: Running pre-create checks...
	I1009 18:47:37.435458   17401 main.go:141] libmachine: (addons-421083) Calling .PreCreateCheck
	I1009 18:47:37.435983   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:47:37.436428   17401 main.go:141] libmachine: Creating machine...
	I1009 18:47:37.436443   17401 main.go:141] libmachine: (addons-421083) Calling .Create
	I1009 18:47:37.436583   17401 main.go:141] libmachine: (addons-421083) Creating KVM machine...
	I1009 18:47:37.437676   17401 main.go:141] libmachine: (addons-421083) DBG | found existing default KVM network
	I1009 18:47:37.438360   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.438220   17423 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a60}
	I1009 18:47:37.438405   17401 main.go:141] libmachine: (addons-421083) DBG | created network xml: 
	I1009 18:47:37.438425   17401 main.go:141] libmachine: (addons-421083) DBG | <network>
	I1009 18:47:37.438435   17401 main.go:141] libmachine: (addons-421083) DBG |   <name>mk-addons-421083</name>
	I1009 18:47:37.438445   17401 main.go:141] libmachine: (addons-421083) DBG |   <dns enable='no'/>
	I1009 18:47:37.438452   17401 main.go:141] libmachine: (addons-421083) DBG |   
	I1009 18:47:37.438465   17401 main.go:141] libmachine: (addons-421083) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 18:47:37.438475   17401 main.go:141] libmachine: (addons-421083) DBG |     <dhcp>
	I1009 18:47:37.438482   17401 main.go:141] libmachine: (addons-421083) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 18:47:37.438494   17401 main.go:141] libmachine: (addons-421083) DBG |     </dhcp>
	I1009 18:47:37.438507   17401 main.go:141] libmachine: (addons-421083) DBG |   </ip>
	I1009 18:47:37.438517   17401 main.go:141] libmachine: (addons-421083) DBG |   
	I1009 18:47:37.438527   17401 main.go:141] libmachine: (addons-421083) DBG | </network>
	I1009 18:47:37.438535   17401 main.go:141] libmachine: (addons-421083) DBG | 
	I1009 18:47:37.443692   17401 main.go:141] libmachine: (addons-421083) DBG | trying to create private KVM network mk-addons-421083 192.168.39.0/24...
	I1009 18:47:37.506082   17401 main.go:141] libmachine: (addons-421083) DBG | private KVM network mk-addons-421083 192.168.39.0/24 created
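	
	At this point the kvm2 driver has generated a libvirt network definition (DNS disabled, 192.168.39.0/24 with a DHCP range) and created the private network from it. A minimal sketch of defining and starting such a network, assuming the libvirt Go bindings (the import path libvirt.org/go/libvirt is an assumption; this is not minikube's code):
	
	    package main
	
	    import (
	        "fmt"
	
	        libvirt "libvirt.org/go/libvirt"
	    )
	
	    // networkXML mirrors the definition printed in the log above.
	    const networkXML = `<network>
	      <name>mk-addons-421083</name>
	      <dns enable='no'/>
	      <ip address='192.168.39.1' netmask='255.255.255.0'>
	        <dhcp>
	          <range start='192.168.39.2' end='192.168.39.253'/>
	        </dhcp>
	      </ip>
	    </network>`
	
	    func main() {
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        // Define the persistent network from XML, then start it.
	        net, err := conn.NetworkDefineXML(networkXML)
	        if err != nil {
	            panic(err)
	        }
	        defer net.Free()
	        if err := net.Create(); err != nil {
	            panic(err)
	        }
	        fmt.Println("private KVM network created")
	    }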
	I1009 18:47:37.506113   17401 main.go:141] libmachine: (addons-421083) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 ...
	I1009 18:47:37.506128   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.506023   17423 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:37.506154   17401 main.go:141] libmachine: (addons-421083) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 18:47:37.506330   17401 main.go:141] libmachine: (addons-421083) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 18:47:37.766177   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:37.766047   17423 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa...
	I1009 18:47:38.007798   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:38.007670   17423 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/addons-421083.rawdisk...
	I1009 18:47:38.007832   17401 main.go:141] libmachine: (addons-421083) DBG | Writing magic tar header
	I1009 18:47:38.007847   17401 main.go:141] libmachine: (addons-421083) DBG | Writing SSH key tar header
	I1009 18:47:38.007859   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:38.007787   17423 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 ...
	I1009 18:47:38.007876   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083
	I1009 18:47:38.007949   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083 (perms=drwx------)
	I1009 18:47:38.007978   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 18:47:38.007989   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 18:47:38.008002   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 18:47:38.008012   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 18:47:38.008023   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 18:47:38.008033   17401 main.go:141] libmachine: (addons-421083) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 18:47:38.008047   17401 main.go:141] libmachine: (addons-421083) Creating domain...
	I1009 18:47:38.008061   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:38.008075   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 18:47:38.008084   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 18:47:38.008093   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home/jenkins
	I1009 18:47:38.008101   17401 main.go:141] libmachine: (addons-421083) DBG | Checking permissions on dir: /home
	I1009 18:47:38.008110   17401 main.go:141] libmachine: (addons-421083) DBG | Skipping /home - not owner
	I1009 18:47:38.009054   17401 main.go:141] libmachine: (addons-421083) define libvirt domain using xml: 
	I1009 18:47:38.009087   17401 main.go:141] libmachine: (addons-421083) <domain type='kvm'>
	I1009 18:47:38.009109   17401 main.go:141] libmachine: (addons-421083)   <name>addons-421083</name>
	I1009 18:47:38.009119   17401 main.go:141] libmachine: (addons-421083)   <memory unit='MiB'>4000</memory>
	I1009 18:47:38.009127   17401 main.go:141] libmachine: (addons-421083)   <vcpu>2</vcpu>
	I1009 18:47:38.009131   17401 main.go:141] libmachine: (addons-421083)   <features>
	I1009 18:47:38.009151   17401 main.go:141] libmachine: (addons-421083)     <acpi/>
	I1009 18:47:38.009165   17401 main.go:141] libmachine: (addons-421083)     <apic/>
	I1009 18:47:38.009172   17401 main.go:141] libmachine: (addons-421083)     <pae/>
	I1009 18:47:38.009177   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009182   17401 main.go:141] libmachine: (addons-421083)   </features>
	I1009 18:47:38.009189   17401 main.go:141] libmachine: (addons-421083)   <cpu mode='host-passthrough'>
	I1009 18:47:38.009194   17401 main.go:141] libmachine: (addons-421083)   
	I1009 18:47:38.009202   17401 main.go:141] libmachine: (addons-421083)   </cpu>
	I1009 18:47:38.009207   17401 main.go:141] libmachine: (addons-421083)   <os>
	I1009 18:47:38.009214   17401 main.go:141] libmachine: (addons-421083)     <type>hvm</type>
	I1009 18:47:38.009228   17401 main.go:141] libmachine: (addons-421083)     <boot dev='cdrom'/>
	I1009 18:47:38.009238   17401 main.go:141] libmachine: (addons-421083)     <boot dev='hd'/>
	I1009 18:47:38.009262   17401 main.go:141] libmachine: (addons-421083)     <bootmenu enable='no'/>
	I1009 18:47:38.009290   17401 main.go:141] libmachine: (addons-421083)   </os>
	I1009 18:47:38.009299   17401 main.go:141] libmachine: (addons-421083)   <devices>
	I1009 18:47:38.009307   17401 main.go:141] libmachine: (addons-421083)     <disk type='file' device='cdrom'>
	I1009 18:47:38.009318   17401 main.go:141] libmachine: (addons-421083)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/boot2docker.iso'/>
	I1009 18:47:38.009326   17401 main.go:141] libmachine: (addons-421083)       <target dev='hdc' bus='scsi'/>
	I1009 18:47:38.009331   17401 main.go:141] libmachine: (addons-421083)       <readonly/>
	I1009 18:47:38.009336   17401 main.go:141] libmachine: (addons-421083)     </disk>
	I1009 18:47:38.009343   17401 main.go:141] libmachine: (addons-421083)     <disk type='file' device='disk'>
	I1009 18:47:38.009351   17401 main.go:141] libmachine: (addons-421083)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 18:47:38.009363   17401 main.go:141] libmachine: (addons-421083)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/addons-421083.rawdisk'/>
	I1009 18:47:38.009370   17401 main.go:141] libmachine: (addons-421083)       <target dev='hda' bus='virtio'/>
	I1009 18:47:38.009384   17401 main.go:141] libmachine: (addons-421083)     </disk>
	I1009 18:47:38.009402   17401 main.go:141] libmachine: (addons-421083)     <interface type='network'>
	I1009 18:47:38.009415   17401 main.go:141] libmachine: (addons-421083)       <source network='mk-addons-421083'/>
	I1009 18:47:38.009424   17401 main.go:141] libmachine: (addons-421083)       <model type='virtio'/>
	I1009 18:47:38.009430   17401 main.go:141] libmachine: (addons-421083)     </interface>
	I1009 18:47:38.009436   17401 main.go:141] libmachine: (addons-421083)     <interface type='network'>
	I1009 18:47:38.009442   17401 main.go:141] libmachine: (addons-421083)       <source network='default'/>
	I1009 18:47:38.009448   17401 main.go:141] libmachine: (addons-421083)       <model type='virtio'/>
	I1009 18:47:38.009453   17401 main.go:141] libmachine: (addons-421083)     </interface>
	I1009 18:47:38.009459   17401 main.go:141] libmachine: (addons-421083)     <serial type='pty'>
	I1009 18:47:38.009465   17401 main.go:141] libmachine: (addons-421083)       <target port='0'/>
	I1009 18:47:38.009477   17401 main.go:141] libmachine: (addons-421083)     </serial>
	I1009 18:47:38.009488   17401 main.go:141] libmachine: (addons-421083)     <console type='pty'>
	I1009 18:47:38.009505   17401 main.go:141] libmachine: (addons-421083)       <target type='serial' port='0'/>
	I1009 18:47:38.009516   17401 main.go:141] libmachine: (addons-421083)     </console>
	I1009 18:47:38.009521   17401 main.go:141] libmachine: (addons-421083)     <rng model='virtio'>
	I1009 18:47:38.009527   17401 main.go:141] libmachine: (addons-421083)       <backend model='random'>/dev/random</backend>
	I1009 18:47:38.009533   17401 main.go:141] libmachine: (addons-421083)     </rng>
	I1009 18:47:38.009547   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009559   17401 main.go:141] libmachine: (addons-421083)     
	I1009 18:47:38.009566   17401 main.go:141] libmachine: (addons-421083)   </devices>
	I1009 18:47:38.009570   17401 main.go:141] libmachine: (addons-421083) </domain>
	I1009 18:47:38.009576   17401 main.go:141] libmachine: (addons-421083) 
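	
	The domain XML assembled above (ISO cdrom for first boot, raw virtio disk, one interface on the private network and one on the default network, serial console, virtio RNG) is then defined and booted through libvirt. A hedged sketch of that define-and-start step, again assuming the libvirt Go bindings and reading the XML from a file rather than building it as the driver does:
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	
	        libvirt "libvirt.org/go/libvirt"
	    )
	
	    func main() {
	        // Path to a domain XML file like the one printed above.
	        xml, err := os.ReadFile(os.Args[1])
	        if err != nil {
	            panic(err)
	        }
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	
	        // Define the persistent domain, then boot it (Create starts a defined domain).
	        dom, err := conn.DomainDefineXML(string(xml))
	        if err != nil {
	            panic(err)
	        }
	        defer dom.Free()
	        if err := dom.Create(); err != nil {
	            panic(err)
	        }
	        fmt.Println("domain defined and started")
	    }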
	I1009 18:47:38.015758   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:91:11:d6 in network default
	I1009 18:47:38.016255   17401 main.go:141] libmachine: (addons-421083) Ensuring networks are active...
	I1009 18:47:38.016273   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:38.016892   17401 main.go:141] libmachine: (addons-421083) Ensuring network default is active
	I1009 18:47:38.017146   17401 main.go:141] libmachine: (addons-421083) Ensuring network mk-addons-421083 is active
	I1009 18:47:38.018465   17401 main.go:141] libmachine: (addons-421083) Getting domain xml...
	I1009 18:47:38.019101   17401 main.go:141] libmachine: (addons-421083) Creating domain...
	I1009 18:47:39.430327   17401 main.go:141] libmachine: (addons-421083) Waiting to get IP...
	I1009 18:47:39.431067   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:39.431443   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:39.431503   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:39.431440   17423 retry.go:31] will retry after 262.024745ms: waiting for machine to come up
	I1009 18:47:39.695075   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:39.695601   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:39.695630   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:39.695542   17423 retry.go:31] will retry after 388.91699ms: waiting for machine to come up
	I1009 18:47:40.086047   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.086501   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.086536   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.086453   17423 retry.go:31] will retry after 325.478066ms: waiting for machine to come up
	I1009 18:47:40.414233   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.414744   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.414767   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.414704   17423 retry.go:31] will retry after 425.338344ms: waiting for machine to come up
	I1009 18:47:40.841260   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:40.841780   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:40.841819   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:40.841733   17423 retry.go:31] will retry after 735.054961ms: waiting for machine to come up
	I1009 18:47:41.578571   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:41.578975   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:41.578999   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:41.578938   17423 retry.go:31] will retry after 879.023333ms: waiting for machine to come up
	I1009 18:47:42.459480   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:42.460097   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:42.460126   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:42.460058   17423 retry.go:31] will retry after 1.0961467s: waiting for machine to come up
	I1009 18:47:43.558333   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:43.558716   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:43.558746   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:43.558674   17423 retry.go:31] will retry after 1.435955653s: waiting for machine to come up
	I1009 18:47:44.996421   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:44.996783   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:44.996809   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:44.996743   17423 retry.go:31] will retry after 1.468799411s: waiting for machine to come up
	I1009 18:47:46.466652   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:46.467054   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:46.467080   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:46.467019   17423 retry.go:31] will retry after 1.987591191s: waiting for machine to come up
	I1009 18:47:48.457235   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:48.457690   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:48.457718   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:48.457639   17423 retry.go:31] will retry after 2.254440714s: waiting for machine to come up
	I1009 18:47:50.713161   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:50.713641   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:50.713666   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:50.713606   17423 retry.go:31] will retry after 2.487139058s: waiting for machine to come up
	I1009 18:47:53.202934   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:53.203455   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:53.203495   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:53.203405   17423 retry.go:31] will retry after 3.308396575s: waiting for machine to come up
	I1009 18:47:56.515692   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:47:56.516102   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find current IP address of domain addons-421083 in network mk-addons-421083
	I1009 18:47:56.516124   17401 main.go:141] libmachine: (addons-421083) DBG | I1009 18:47:56.516062   17423 retry.go:31] will retry after 4.310196536s: waiting for machine to come up
	I1009 18:48:00.830339   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.830821   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has current primary IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.830842   17401 main.go:141] libmachine: (addons-421083) Found IP for machine: 192.168.39.156
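	
	The "Waiting to get IP" loop above polls with an increasing backoff until the guest's MAC address shows up in the private network's DHCP leases. A rough sketch of that lookup, assuming the libvirt Go bindings (the NetworkDHCPLease field names used here are an assumption):
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	
	        libvirt "libvirt.org/go/libvirt"
	    )
	
	    // lookupIP polls the network's DHCP leases until one matches the domain's
	    // MAC address or the timeout expires.
	    func lookupIP(network *libvirt.Network, mac string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            leases, err := network.GetDHCPLeases()
	            if err != nil {
	                return "", err
	            }
	            for _, l := range leases {
	                if l.Mac == mac {
	                    return l.IPaddr, nil
	                }
	            }
	            time.Sleep(2 * time.Second) // the driver grows this interval between attempts
	        }
	        return "", fmt.Errorf("no DHCP lease for %s within %v", mac, timeout)
	    }
	
	    func main() {
	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()
	        network, err := conn.LookupNetworkByName("mk-addons-421083")
	        if err != nil {
	            panic(err)
	        }
	        defer network.Free()
	        ip, err := lookupIP(network, "52:54:00:90:f5:45", 3*time.Minute)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println("found IP:", ip)
	    }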
	I1009 18:48:00.830877   17401 main.go:141] libmachine: (addons-421083) Reserving static IP address...
	I1009 18:48:00.831223   17401 main.go:141] libmachine: (addons-421083) DBG | unable to find host DHCP lease matching {name: "addons-421083", mac: "52:54:00:90:f5:45", ip: "192.168.39.156"} in network mk-addons-421083
	I1009 18:48:00.898325   17401 main.go:141] libmachine: (addons-421083) DBG | Getting to WaitForSSH function...
	I1009 18:48:00.898358   17401 main.go:141] libmachine: (addons-421083) Reserved static IP address: 192.168.39.156
	I1009 18:48:00.898370   17401 main.go:141] libmachine: (addons-421083) Waiting for SSH to be available...
	I1009 18:48:00.900672   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.901110   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:minikube Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:00.901139   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:00.901342   17401 main.go:141] libmachine: (addons-421083) DBG | Using SSH client type: external
	I1009 18:48:00.901368   17401 main.go:141] libmachine: (addons-421083) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa (-rw-------)
	I1009 18:48:00.901402   17401 main.go:141] libmachine: (addons-421083) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.156 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 18:48:00.901428   17401 main.go:141] libmachine: (addons-421083) DBG | About to run SSH command:
	I1009 18:48:00.901443   17401 main.go:141] libmachine: (addons-421083) DBG | exit 0
	I1009 18:48:01.030919   17401 main.go:141] libmachine: (addons-421083) DBG | SSH cmd err, output: <nil>: 
	I1009 18:48:01.031169   17401 main.go:141] libmachine: (addons-421083) KVM machine creation complete!
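	
	WaitForSSH above shells out to /usr/bin/ssh with the generated key and retries "exit 0" until the guest answers. A sketch of an equivalent wait using golang.org/x/crypto/ssh instead of the system client (shown only as an alternative; it is not what the driver does here):
	
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "time"
	
	        "golang.org/x/crypto/ssh"
	    )
	
	    // waitForSSH dials the guest and runs "exit 0" until it succeeds or times out.
	    func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	        key, err := os.ReadFile(keyPath)
	        if err != nil {
	            return err
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            return err
	        }
	        cfg := &ssh.ClientConfig{
	            User:            user,
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
	            Timeout:         10 * time.Second,
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
	                sess, err := client.NewSession()
	                if err == nil {
	                    runErr := sess.Run("exit 0")
	                    sess.Close()
	                    client.Close()
	                    if runErr == nil {
	                        return nil
	                    }
	                } else {
	                    client.Close()
	                }
	            }
	            time.Sleep(3 * time.Second)
	        }
	        return fmt.Errorf("SSH to %s not available within %v", addr, timeout)
	    }
	
	    func main() {
	        keyPath := "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa"
	        if err := waitForSSH("192.168.39.156:22", "docker", keyPath, 5*time.Minute); err != nil {
	            fmt.Println(err)
	            return
	        }
	        fmt.Println("SSH available")
	    }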
	I1009 18:48:01.031492   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:48:01.031988   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:01.032145   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:01.032303   17401 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 18:48:01.032314   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:01.033441   17401 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 18:48:01.033456   17401 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 18:48:01.033465   17401 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 18:48:01.033473   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.035611   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.035965   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.035991   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.036065   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.036213   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.036366   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.036510   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.036666   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.036832   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.036843   17401 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 18:48:01.142336   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:48:01.142357   17401 main.go:141] libmachine: Detecting the provisioner...
	I1009 18:48:01.142365   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.144998   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.145322   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.145351   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.145498   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.145669   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.145835   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.145975   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.146132   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.146288   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.146299   17401 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 18:48:01.251406   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 18:48:01.251456   17401 main.go:141] libmachine: found compatible host: buildroot
	I1009 18:48:01.251461   17401 main.go:141] libmachine: Provisioning with buildroot...
	I1009 18:48:01.251475   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.251684   17401 buildroot.go:166] provisioning hostname "addons-421083"
	I1009 18:48:01.251706   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.251880   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.254199   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.254503   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.254531   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.254653   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.254818   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.254937   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.255078   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.255255   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.255467   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.255486   17401 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-421083 && echo "addons-421083" | sudo tee /etc/hostname
	I1009 18:48:01.373395   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-421083
	
	I1009 18:48:01.373425   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.375901   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.376289   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.376313   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.376475   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.376657   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.376789   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.376920   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.377083   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.377254   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.377277   17401 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-421083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-421083/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-421083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:48:01.493914   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:48:01.493942   17401 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 18:48:01.493991   17401 buildroot.go:174] setting up certificates
	I1009 18:48:01.494007   17401 provision.go:84] configureAuth start
	I1009 18:48:01.494018   17401 main.go:141] libmachine: (addons-421083) Calling .GetMachineName
	I1009 18:48:01.494259   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:01.496681   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.497081   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.497104   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.497223   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.499886   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.500217   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.500245   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.500288   17401 provision.go:143] copyHostCerts
	I1009 18:48:01.500368   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 18:48:01.500494   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 18:48:01.500583   17401 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 18:48:01.500630   17401 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.addons-421083 san=[127.0.0.1 192.168.39.156 addons-421083 localhost minikube]
	I1009 18:48:01.803364   17401 provision.go:177] copyRemoteCerts
	I1009 18:48:01.803416   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:48:01.803437   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.805981   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.806295   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.806324   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.806464   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.806662   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.806810   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.806927   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:01.889553   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:48:01.913620   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:48:01.936905   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 18:48:01.960014   17401 provision.go:87] duration metric: took 465.99311ms to configureAuth
	I1009 18:48:01.960042   17401 buildroot.go:189] setting minikube options for container-runtime
	I1009 18:48:01.960241   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:01.960317   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:01.963075   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.963419   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:01.963460   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:01.963601   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:01.963785   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.963939   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:01.964063   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:01.964206   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:01.964382   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:01.964401   17401 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:48:02.190106   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
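Note: the step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. How that file is consumed by the crio.service unit is not shown in this log, so the check below is only a sketch under that assumption; the file path and option value are taken from the command above.
	cat /etc/sysconfig/crio.minikube       # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl cat crio | grep -i minikube  # look for an EnvironmentFile/option referencing this file (assumption, not shown in the log)
	systemctl is-active crio               # crio should be active again after the restart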
	I1009 18:48:02.190144   17401 main.go:141] libmachine: Checking connection to Docker...
	I1009 18:48:02.190156   17401 main.go:141] libmachine: (addons-421083) Calling .GetURL
	I1009 18:48:02.191369   17401 main.go:141] libmachine: (addons-421083) DBG | Using libvirt version 6000000
	I1009 18:48:02.193485   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.193859   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.193887   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.194019   17401 main.go:141] libmachine: Docker is up and running!
	I1009 18:48:02.194034   17401 main.go:141] libmachine: Reticulating splines...
	I1009 18:48:02.194042   17401 client.go:171] duration metric: took 25.194285944s to LocalClient.Create
	I1009 18:48:02.194070   17401 start.go:167] duration metric: took 25.194353336s to libmachine.API.Create "addons-421083"
	I1009 18:48:02.194088   17401 start.go:293] postStartSetup for "addons-421083" (driver="kvm2")
	I1009 18:48:02.194103   17401 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:48:02.194124   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.194340   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:48:02.194363   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.196373   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.196652   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.196672   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.196791   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.196930   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.197056   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.197157   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.277130   17401 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:48:02.281365   17401 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 18:48:02.281391   17401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 18:48:02.281474   17401 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 18:48:02.281506   17401 start.go:296] duration metric: took 87.409181ms for postStartSetup
	I1009 18:48:02.281540   17401 main.go:141] libmachine: (addons-421083) Calling .GetConfigRaw
	I1009 18:48:02.282055   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:02.284406   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.284731   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.284757   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.284934   17401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/config.json ...
	I1009 18:48:02.285120   17401 start.go:128] duration metric: took 25.302528351s to createHost
	I1009 18:48:02.285140   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.287015   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.287341   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.287367   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.287516   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.287680   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.287802   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.287910   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.288034   17401 main.go:141] libmachine: Using SSH client type: native
	I1009 18:48:02.288218   17401 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I1009 18:48:02.288231   17401 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 18:48:02.395749   17401 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728499682.371890623
	
	I1009 18:48:02.395770   17401 fix.go:216] guest clock: 1728499682.371890623
	I1009 18:48:02.395777   17401 fix.go:229] Guest: 2024-10-09 18:48:02.371890623 +0000 UTC Remote: 2024-10-09 18:48:02.285131602 +0000 UTC m=+25.400487636 (delta=86.759021ms)
	I1009 18:48:02.395800   17401 fix.go:200] guest clock delta is within tolerance: 86.759021ms
	I1009 18:48:02.395807   17401 start.go:83] releasing machines lock for "addons-421083", held for 25.413289434s
	I1009 18:48:02.395835   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.396064   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:02.398584   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.398954   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.398990   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.399113   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399660   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399829   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:02.399913   17401 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:48:02.399967   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.399968   17401 ssh_runner.go:195] Run: cat /version.json
	I1009 18:48:02.400017   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:02.402492   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402673   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402814   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.402842   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.402956   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:02.402967   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.402980   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:02.403146   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.403198   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:02.403318   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.403376   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:02.403450   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.403839   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:02.403945   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:02.480633   17401 ssh_runner.go:195] Run: systemctl --version
	I1009 18:48:02.509100   17401 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:48:02.669262   17401 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 18:48:02.674791   17401 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 18:48:02.674854   17401 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:48:02.692275   17401 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 18:48:02.692297   17401 start.go:495] detecting cgroup driver to use...
	I1009 18:48:02.692357   17401 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:48:02.708890   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:48:02.722433   17401 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:48:02.722490   17401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:48:02.735669   17401 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:48:02.748859   17401 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:48:02.866868   17401 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:48:03.031024   17401 docker.go:233] disabling docker service ...
	I1009 18:48:03.031122   17401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:48:03.046146   17401 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:48:03.059418   17401 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:48:03.167969   17401 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:48:03.282724   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:48:03.296703   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:48:03.314454   17401 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:48:03.314523   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.324913   17401 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:48:03.324969   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.335108   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.345321   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.355784   17401 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:48:03.366770   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.377216   17401 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:48:03.393604   17401 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
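Note: the sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.10 pause image, the "cgroupfs" cgroup manager, conmon_cgroup "pod", and a default sysctl that opens unprivileged ports from 0. A spot-check sketch follows; the expected values are inferred from the sed expressions above, not from output captured in this log.
	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected (inferred from the sed expressions above):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	grep -A1 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",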
	I1009 18:48:03.403613   17401 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:48:03.412803   17401 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:48:03.412852   17401 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:48:03.427090   17401 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
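Note: the sysctl probe a few lines above fails with "No such file or directory" because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; after the modprobe the key is readable, and IPv4 forwarding is enabled directly via /proc. A minimal recap, using only the commands already shown:
	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # the key exists once the module is loaded
	cat /proc/sys/net/ipv4/ip_forward           # 1 after the echo above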
	I1009 18:48:03.437364   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:03.549313   17401 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:48:03.645167   17401 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:48:03.645276   17401 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:48:03.649832   17401 start.go:563] Will wait 60s for crictl version
	I1009 18:48:03.649895   17401 ssh_runner.go:195] Run: which crictl
	I1009 18:48:03.653543   17401 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:48:03.696440   17401 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 18:48:03.696572   17401 ssh_runner.go:195] Run: crio --version
	I1009 18:48:03.723729   17401 ssh_runner.go:195] Run: crio --version
	I1009 18:48:03.753445   17401 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 18:48:03.754568   17401 main.go:141] libmachine: (addons-421083) Calling .GetIP
	I1009 18:48:03.757062   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:03.757375   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:03.757402   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:03.757605   17401 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 18:48:03.761539   17401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:48:03.773458   17401 kubeadm.go:883] updating cluster {Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1009 18:48:03.773582   17401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:48:03.773640   17401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:48:03.804576   17401 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 18:48:03.804637   17401 ssh_runner.go:195] Run: which lz4
	I1009 18:48:03.808884   17401 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 18:48:03.813214   17401 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 18:48:03.813241   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 18:48:05.084237   17401 crio.go:462] duration metric: took 1.275371492s to copy over tarball
	I1009 18:48:05.084338   17401 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 18:48:07.168124   17401 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083721492s)
	I1009 18:48:07.168152   17401 crio.go:469] duration metric: took 2.083874293s to extract the tarball
	I1009 18:48:07.168162   17401 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 18:48:07.204594   17401 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:48:07.245226   17401 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:48:07.245247   17401 cache_images.go:84] Images are preloaded, skipping loading
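Note: the preload check is keyed off a representative image. Before the tarball was extracted the runtime had no registry.k8s.io/kube-apiserver:v1.31.1 (hence "assuming images are not preloaded" at 18:48:03.804); after extraction it is present, so image loading is skipped. A rough equivalent of that check:
	sudo crictl images --output json | grep -c 'registry.k8s.io/kube-apiserver:v1.31.1'
	# 0 before the preload tarball is extracted, >=1 afterwards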
	I1009 18:48:07.245256   17401 kubeadm.go:934] updating node { 192.168.39.156 8443 v1.31.1 crio true true} ...
	I1009 18:48:07.245376   17401 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-421083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
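Note: the kubelet drop-in above lists ExecStart= twice on purpose: the empty assignment clears the ExecStart inherited from kubelet.service, and the second line sets the minikube-specific flags. The merged result can be inspected as sketched below (the drop-in path /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the one written by the scp step a few lines further down):
	systemctl cat kubelet                   # shows kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet     # effective command line after the override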
	I1009 18:48:07.245454   17401 ssh_runner.go:195] Run: crio config
	I1009 18:48:07.290260   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:48:07.290286   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:48:07.290322   17401 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:48:07.290344   17401 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-421083 NodeName:addons-421083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:48:07.290463   17401 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-421083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
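Note: the kubeadm config above is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs against it. It uses the kubeadm.k8s.io/v1beta3 API, which is deprecated in kubeadm v1.31 and triggers the migration warnings further down. If reproducing this by hand, the file can be checked and migrated with kubeadm itself; a sketch, assuming these subcommands are available in the v1.31 binary (the migrate invocation is the one kubeadm's own warning suggests, and the output filename here is only an example):
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-migrated.yaml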
	I1009 18:48:07.290524   17401 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:48:07.300488   17401 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:48:07.300579   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:48:07.309650   17401 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1009 18:48:07.325786   17401 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:48:07.342140   17401 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1009 18:48:07.358622   17401 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I1009 18:48:07.362600   17401 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.156	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:48:07.374477   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:07.485056   17401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:48:07.502430   17401 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083 for IP: 192.168.39.156
	I1009 18:48:07.502456   17401 certs.go:194] generating shared ca certs ...
	I1009 18:48:07.502478   17401 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.502634   17401 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 18:48:07.613829   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt ...
	I1009 18:48:07.613862   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt: {Name:mkd74ce774b5650363e1df082fa10c8cece0b7f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.614055   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key ...
	I1009 18:48:07.614070   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key: {Name:mk4789884a13b38a73e51d5c1c8759c998d7f013 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.614186   17401 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 18:48:07.800680   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt ...
	I1009 18:48:07.800711   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt: {Name:mkb557c5d244639ebef20bbe3aff9ae718550707 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.800879   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key ...
	I1009 18:48:07.800889   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key: {Name:mk5ec2b0aefcc430750ca0126384175e68dc86da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:07.800958   17401 certs.go:256] generating profile certs ...
	I1009 18:48:07.801011   17401 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key
	I1009 18:48:07.801031   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt with IP's: []
	I1009 18:48:08.067278   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt ...
	I1009 18:48:08.067305   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: {Name:mk59146854d725388c4dd57b83785f3c38be0fcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.067456   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key ...
	I1009 18:48:08.067465   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.key: {Name:mkfe4cce716a96d331355a3d3fdeccb1cddc5ac8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.067534   17401 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342
	I1009 18:48:08.067551   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.156]
	I1009 18:48:08.178724   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 ...
	I1009 18:48:08.178750   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342: {Name:mkc5352535e88481616dd4eefcb57376b1e04b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.178894   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342 ...
	I1009 18:48:08.178905   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342: {Name:mk99ad422c16af24903e5c16277883291bc9af71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.178972   17401 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt.499d8342 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt
	I1009 18:48:08.179039   17401 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key.499d8342 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key
	I1009 18:48:08.179120   17401 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key
	I1009 18:48:08.179144   17401 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt with IP's: []
	I1009 18:48:08.356797   17401 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt ...
	I1009 18:48:08.356832   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt: {Name:mk9c7e610bc33161325374a91664eaebd6756667 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.357010   17401 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key ...
	I1009 18:48:08.357023   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key: {Name:mkd9569ac90f623608f9055d0e9e2641756234a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:08.357213   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:48:08.357249   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 18:48:08.357281   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:48:08.357313   17401 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 18:48:08.357905   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:48:08.385490   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:48:08.408937   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:48:08.431878   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 18:48:08.458917   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:48:08.483605   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:48:08.510051   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:48:08.534864   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 18:48:08.559913   17401 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:48:08.585173   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:48:08.603075   17401 ssh_runner.go:195] Run: openssl version
	I1009 18:48:08.609004   17401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:48:08.619670   17401 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.624351   17401 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.624400   17401 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:48:08.630368   17401 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
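Note: the symlink name /etc/ssl/certs/b5213941.0 is the OpenSSL subject-name hash of minikubeCA.pem (computed by the openssl x509 -hash -noout run just above) plus a ".0" suffix, which is how OpenSSL's hashed certificate directories locate a CA. A minimal recap of that sequence:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA, per the symlink above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"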
	I1009 18:48:08.641154   17401 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:48:08.645302   17401 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:48:08.645359   17401 kubeadm.go:392] StartCluster: {Name:addons-421083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-421083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:48:08.645451   17401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:48:08.645504   17401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:48:08.682129   17401 cri.go:89] found id: ""
	I1009 18:48:08.682207   17401 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:48:08.692654   17401 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:48:08.704954   17401 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:48:08.718382   17401 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:48:08.718413   17401 kubeadm.go:157] found existing configuration files:
	
	I1009 18:48:08.718468   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:48:08.728030   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:48:08.728096   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:48:08.738202   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:48:08.747870   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:48:08.747937   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:48:08.758405   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:48:08.767746   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:48:08.767815   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:48:08.777291   17401 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:48:08.786050   17401 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:48:08.786104   17401 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:48:08.795246   17401 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 18:48:08.844215   17401 kubeadm.go:310] W1009 18:48:08.827413     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:48:08.845699   17401 kubeadm.go:310] W1009 18:48:08.828967     812 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:48:08.950491   17401 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:48:19.199053   17401 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:48:19.199172   17401 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:48:19.199289   17401 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:48:19.199432   17401 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:48:19.199571   17401 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:48:19.199666   17401 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:48:19.201344   17401 out.go:235]   - Generating certificates and keys ...
	I1009 18:48:19.201446   17401 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:48:19.201520   17401 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:48:19.201608   17401 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:48:19.201669   17401 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:48:19.201751   17401 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:48:19.201802   17401 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:48:19.201848   17401 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:48:19.202008   17401 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-421083 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1009 18:48:19.202073   17401 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:48:19.202231   17401 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-421083 localhost] and IPs [192.168.39.156 127.0.0.1 ::1]
	I1009 18:48:19.202314   17401 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:48:19.202368   17401 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:48:19.202408   17401 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:48:19.202461   17401 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:48:19.202520   17401 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:48:19.202576   17401 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:48:19.202642   17401 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:48:19.202732   17401 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:48:19.202808   17401 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:48:19.202917   17401 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:48:19.203006   17401 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:48:19.204545   17401 out.go:235]   - Booting up control plane ...
	I1009 18:48:19.204657   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:48:19.204757   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:48:19.204849   17401 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:48:19.204997   17401 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:48:19.205141   17401 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:48:19.205204   17401 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:48:19.205375   17401 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:48:19.205501   17401 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:48:19.205556   17401 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001164201s
	I1009 18:48:19.205634   17401 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:48:19.205700   17401 kubeadm.go:310] [api-check] The API server is healthy after 4.502485036s
	I1009 18:48:19.205799   17401 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:48:19.205933   17401 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:48:19.206020   17401 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:48:19.206378   17401 kubeadm.go:310] [mark-control-plane] Marking the node addons-421083 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:48:19.206514   17401 kubeadm.go:310] [bootstrap-token] Using token: g5juxz.ri7598v7sv8u8xm3
	I1009 18:48:19.207850   17401 out.go:235]   - Configuring RBAC rules ...
	I1009 18:48:19.207953   17401 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:48:19.208025   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:48:19.208143   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:48:19.208271   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:48:19.208371   17401 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:48:19.208445   17401 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:48:19.208562   17401 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:48:19.208619   17401 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:48:19.208667   17401 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:48:19.208673   17401 kubeadm.go:310] 
	I1009 18:48:19.208725   17401 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:48:19.208730   17401 kubeadm.go:310] 
	I1009 18:48:19.208811   17401 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:48:19.208820   17401 kubeadm.go:310] 
	I1009 18:48:19.208848   17401 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:48:19.208899   17401 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:48:19.208942   17401 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:48:19.208948   17401 kubeadm.go:310] 
	I1009 18:48:19.209010   17401 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:48:19.209019   17401 kubeadm.go:310] 
	I1009 18:48:19.209070   17401 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:48:19.209077   17401 kubeadm.go:310] 
	I1009 18:48:19.209123   17401 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:48:19.209213   17401 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:48:19.209301   17401 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:48:19.209309   17401 kubeadm.go:310] 
	I1009 18:48:19.209377   17401 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:48:19.209444   17401 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:48:19.209450   17401 kubeadm.go:310] 
	I1009 18:48:19.209537   17401 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g5juxz.ri7598v7sv8u8xm3 \
	I1009 18:48:19.209638   17401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 18:48:19.209661   17401 kubeadm.go:310] 	--control-plane 
	I1009 18:48:19.209668   17401 kubeadm.go:310] 
	I1009 18:48:19.209787   17401 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:48:19.209796   17401 kubeadm.go:310] 
	I1009 18:48:19.209876   17401 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g5juxz.ri7598v7sv8u8xm3 \
	I1009 18:48:19.209977   17401 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 18:48:19.209989   17401 cni.go:84] Creating CNI manager for ""
	I1009 18:48:19.209998   17401 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:48:19.211426   17401 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 18:48:19.212648   17401 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 18:48:19.223680   17401 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 18:48:19.242617   17401 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:48:19.242731   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:19.242762   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-421083 minikube.k8s.io/updated_at=2024_10_09T18_48_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-421083 minikube.k8s.io/primary=true
	I1009 18:48:19.353738   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:19.353738   17401 ops.go:34] apiserver oom_adj: -16
	I1009 18:48:19.854521   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:20.353977   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:20.854545   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:21.354805   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:21.854008   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:22.354336   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:22.854597   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.354619   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.854652   17401 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:48:23.983308   17401 kubeadm.go:1113] duration metric: took 4.740633863s to wait for elevateKubeSystemPrivileges
	I1009 18:48:23.983345   17401 kubeadm.go:394] duration metric: took 15.337989506s to StartCluster
	I1009 18:48:23.983369   17401 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:23.983500   17401 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:48:23.983994   17401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:48:23.984233   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:48:23.984259   17401 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:48:23.984323   17401 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:48:23.984478   17401 addons.go:69] Setting yakd=true in profile "addons-421083"
	I1009 18:48:23.984491   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:23.984496   17401 addons.go:69] Setting inspektor-gadget=true in profile "addons-421083"
	I1009 18:48:23.984545   17401 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-421083"
	I1009 18:48:23.984554   17401 addons.go:69] Setting ingress-dns=true in profile "addons-421083"
	I1009 18:48:23.984561   17401 addons.go:234] Setting addon inspektor-gadget=true in "addons-421083"
	I1009 18:48:23.984570   17401 addons.go:234] Setting addon ingress-dns=true in "addons-421083"
	I1009 18:48:23.984572   17401 addons.go:69] Setting registry=true in profile "addons-421083"
	I1009 18:48:23.984580   17401 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-421083"
	I1009 18:48:23.984563   17401 addons.go:69] Setting cloud-spanner=true in profile "addons-421083"
	I1009 18:48:23.984587   17401 addons.go:234] Setting addon registry=true in "addons-421083"
	I1009 18:48:23.984595   17401 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-421083"
	I1009 18:48:23.984602   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984607   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984608   17401 addons.go:234] Setting addon cloud-spanner=true in "addons-421083"
	I1009 18:48:23.984618   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984629   17401 addons.go:69] Setting metrics-server=true in profile "addons-421083"
	I1009 18:48:23.984633   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984634   17401 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-421083"
	I1009 18:48:23.984640   17401 addons.go:234] Setting addon metrics-server=true in "addons-421083"
	I1009 18:48:23.984657   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984660   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984505   17401 addons.go:234] Setting addon yakd=true in "addons-421083"
	I1009 18:48:23.985067   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984552   17401 addons.go:69] Setting storage-provisioner=true in profile "addons-421083"
	I1009 18:48:23.985076   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985080   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985093   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.984619   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.985109   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985120   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985124   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985141   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985145   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984515   17401 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-421083"
	I1009 18:48:23.985202   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985221   17401 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-421083"
	I1009 18:48:23.984533   17401 addons.go:69] Setting volumesnapshots=true in profile "addons-421083"
	I1009 18:48:23.985717   17401 addons.go:234] Setting addon volumesnapshots=true in "addons-421083"
	I1009 18:48:23.985746   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.985841   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985874   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.986018   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.985113   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984517   17401 addons.go:69] Setting default-storageclass=true in profile "addons-421083"
	I1009 18:48:23.986192   17401 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-421083"
	I1009 18:48:23.986700   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.986748   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.984544   17401 addons.go:69] Setting gcp-auth=true in profile "addons-421083"
	I1009 18:48:23.987027   17401 mustload.go:65] Loading cluster: addons-421083
	I1009 18:48:23.987285   17401 config.go:182] Loaded profile config "addons-421083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:48:23.987753   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.987807   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.989642   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.989687   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.992145   17401 out.go:177] * Verifying Kubernetes components...
	I1009 18:48:23.984524   17401 addons.go:69] Setting volcano=true in profile "addons-421083"
	I1009 18:48:23.992646   17401 addons.go:234] Setting addon volcano=true in "addons-421083"
	I1009 18:48:23.992678   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.984539   17401 addons.go:69] Setting ingress=true in profile "addons-421083"
	I1009 18:48:23.993968   17401 addons.go:234] Setting addon ingress=true in "addons-421083"
	I1009 18:48:23.994006   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.986041   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:23.985086   17401 addons.go:234] Setting addon storage-provisioner=true in "addons-421083"
	I1009 18:48:23.997102   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:23.997395   17401 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:48:23.997852   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:23.997894   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.004919   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36259
	I1009 18:48:24.005223   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36661
	I1009 18:48:24.005452   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.005654   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.005752   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.011273   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36939
	I1009 18:48:24.011418   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.011467   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.011611   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.012001   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.012356   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.012381   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.012850   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.014603   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42353
	I1009 18:48:24.023502   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1009 18:48:24.023530   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36427
	I1009 18:48:24.024069   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024079   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024110   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024114   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024175   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024213   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024498   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024504   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.024542   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024556   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.024727   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I1009 18:48:24.024999   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025075   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025117   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025235   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.025931   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.026303   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.026727   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.026753   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027186   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027234   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.027336   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027264   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.027690   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.027847   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.028057   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.028435   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.028456   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.028767   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.029591   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.029637   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.029825   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.029855   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.039874   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.039978   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.040084   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.040128   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.059856   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34695
	I1009 18:48:24.060001   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44811
	I1009 18:48:24.060199   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.060611   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.060640   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.061016   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.061024   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.061497   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I1009 18:48:24.061552   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.061568   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.061894   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.061957   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.062420   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.062436   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.062530   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.062548   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.062966   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.063014   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.065373   17401 addons.go:234] Setting addon default-storageclass=true in "addons-421083"
	I1009 18:48:24.065409   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.065781   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.065812   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.065989   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I1009 18:48:24.066043   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40307
	I1009 18:48:24.066107   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I1009 18:48:24.066161   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.066187   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.066229   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.066492   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.066509   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.066546   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.066843   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.066875   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.068515   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.068590   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.068643   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.068789   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.068802   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.069453   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.069472   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.069547   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.069896   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.069911   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.070349   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.070376   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.070576   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.070666   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41303
	I1009 18:48:24.070854   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.070983   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.071808   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.071826   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.072250   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.072886   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.072922   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.072986   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.073066   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.073105   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42877
	I1009 18:48:24.073181   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.074902   17401 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:48:24.075762   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I1009 18:48:24.075872   17401 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-421083"
	I1009 18:48:24.075912   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:24.076275   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.076317   17401 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1009 18:48:24.076740   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36325
	I1009 18:48:24.076320   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.076546   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.077115   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:48:24.077132   17401 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:48:24.077151   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.077879   17401 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:48:24.077895   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:48:24.077912   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.078212   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45675
	I1009 18:48:24.078650   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.078759   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.078766   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.079198   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.079214   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.079599   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.079780   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.080571   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.080759   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.082079   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.083174   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.083498   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.083736   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.083756   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.084054   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.084068   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.084121   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.084170   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.084183   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.084325   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.084623   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.084700   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.084787   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.085015   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.085188   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.085528   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.085776   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.087849   17401 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:48:24.089181   17401 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:48:24.089198   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:48:24.089215   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.090477   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.090630   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.092042   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.092104   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.092611   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.092681   17401 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:48:24.092979   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.093000   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.093150   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.093337   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.093545   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.093679   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.094458   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38339
	I1009 18:48:24.094564   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34247
	I1009 18:48:24.095018   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.095360   17401 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:48:24.095556   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.095627   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.095643   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.095644   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33881
	I1009 18:48:24.096196   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.096394   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.096616   17401 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:48:24.096652   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:48:24.096671   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.098424   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I1009 18:48:24.098714   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.100308   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.100400   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.100420   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.100437   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.100484   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.100500   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.100571   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.100582   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.100626   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.100632   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.100696   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.101488   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.101511   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.101575   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.101658   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.101698   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.101709   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.101836   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.101886   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.101928   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.102171   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.102642   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.102659   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.102804   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.102838   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.103750   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.104669   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.104702   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.105664   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.107515   17401 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:48:24.109023   17401 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:48:24.109042   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:48:24.109061   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.109434   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44093
	I1009 18:48:24.109886   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.110385   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.110407   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.110853   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.111047   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.112498   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.112715   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.113091   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.113118   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.113290   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.113485   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.114426   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 18:48:24.115502   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:48:24.115516   17401 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:48:24.115542   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.115608   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I1009 18:48:24.116092   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45857
	I1009 18:48:24.116565   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.116738   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.116860   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.117128   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.117287   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.117299   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.117792   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.117809   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.118511   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.118522   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.119100   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:24.119141   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:24.119384   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36947
	I1009 18:48:24.119500   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.119513   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.119550   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.119643   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.119679   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.119818   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.119872   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.119946   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.119960   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.120310   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.120332   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.120899   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.121036   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.121899   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.123845   17401 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:48:24.124514   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38171
	I1009 18:48:24.124645   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.125574   17401 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:48:24.125591   17401 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:48:24.125609   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.125671   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.126363   17401 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:48:24.127491   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:48:24.127507   17401 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:48:24.127528   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.130230   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130487   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130754   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.130771   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.130860   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.130874   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.131089   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.131148   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.131230   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.131270   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.131379   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.131417   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.131533   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.131901   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.131914   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.132114   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.132616   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.132817   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.134345   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.135128   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42793
	I1009 18:48:24.135279   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I1009 18:48:24.135650   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.136119   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.136143   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.136362   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:48:24.136526   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.136694   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.136968   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.137087   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1009 18:48:24.137546   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.138002   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.138026   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.138219   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.138353   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.138370   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.138451   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.138731   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.138730   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.138937   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.139027   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:48:24.139822   17401 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:48:24.140445   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.140666   17401 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:48:24.140684   17401 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:48:24.140700   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.140799   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.141814   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:24.141838   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:24.141876   17401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:48:24.141888   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:48:24.141899   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.142249   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:24.142263   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:24.142275   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:24.142283   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:24.142290   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:24.142469   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:24.142480   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	W1009 18:48:24.142544   17401 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:48:24.142571   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:48:24.143972   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:48:24.144192   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.144607   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.144635   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.144764   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.144927   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.145105   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.145259   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.145933   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I1009 18:48:24.146298   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:48:24.146512   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.146612   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.146934   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.146952   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.146999   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.147018   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.147251   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.147360   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.147399   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.147535   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.147541   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.147699   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.148809   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:48:24.149343   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.149665   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1009 18:48:24.150040   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:24.150550   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:24.150573   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:24.150822   17401 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:48:24.151119   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:24.151442   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:24.151723   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:48:24.152913   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:24.153271   17401 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:48:24.154416   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:48:24.154457   17401 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:48:24.154637   17401 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:48:24.154660   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:48:24.154676   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.155982   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:48:24.156000   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:48:24.156020   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.157252   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:24.157792   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.158352   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.158380   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.158525   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.158720   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.158863   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.158994   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.159539   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.159689   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:24.160157   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.160180   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.160446   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.160589   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.160753   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.160902   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:24.161216   17401 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:48:24.161232   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:48:24.161243   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:24.164236   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.164663   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:24.164682   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:24.164796   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:24.164927   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:24.165067   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:24.165173   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	W1009 18:48:24.171028   17401 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56188->192.168.39.156:22: read: connection reset by peer
	I1009 18:48:24.171058   17401 retry.go:31] will retry after 200.986757ms: ssh: handshake failed: read tcp 192.168.39.1:56188->192.168.39.156:22: read: connection reset by peer
	I1009 18:48:24.387734   17401 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:48:24.387747   17401 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:48:24.456892   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:48:24.456922   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:48:24.496734   17401 node_ready.go:35] waiting up to 6m0s for node "addons-421083" to be "Ready" ...
	I1009 18:48:24.499176   17401 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:48:24.499201   17401 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:48:24.499806   17401 node_ready.go:49] node "addons-421083" has status "Ready":"True"
	I1009 18:48:24.499825   17401 node_ready.go:38] duration metric: took 3.05637ms for node "addons-421083" to be "Ready" ...
	I1009 18:48:24.499833   17401 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:24.511085   17401 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:24.572143   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:48:24.646077   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:48:24.646105   17401 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:48:24.648276   17401 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:48:24.648296   17401 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:48:24.681555   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:48:24.697573   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:48:24.698686   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:48:24.698708   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:48:24.699608   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:48:24.732308   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:48:24.732334   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:48:24.734560   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:48:24.744474   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:48:24.755661   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:48:24.755682   17401 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:48:24.772242   17401 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:48:24.772272   17401 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:48:24.809704   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:48:24.836466   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:48:24.836494   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:48:24.859641   17401 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:48:24.859659   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:48:24.871605   17401 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:48:24.871638   17401 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:48:24.959270   17401 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:48:24.959295   17401 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:48:24.960292   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:48:24.960313   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:48:24.972785   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:48:24.972808   17401 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:48:24.993079   17401 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:48:24.993103   17401 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:48:25.087596   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:48:25.116437   17401 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:48:25.116462   17401 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:48:25.158627   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:48:25.238128   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:48:25.238157   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:48:25.258350   17401 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:48:25.258373   17401 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:48:25.262133   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:48:25.262158   17401 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:48:25.272123   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:48:25.272145   17401 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:48:25.453991   17401 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:25.454012   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:48:25.517612   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:48:25.517639   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:48:25.527782   17401 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:48:25.527803   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:48:25.595289   17401 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:48:25.595317   17401 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:48:25.742797   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:25.786098   17401 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:48:25.786124   17401 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:48:25.851056   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:48:25.919100   17401 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:48:25.919127   17401 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:48:26.145587   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:48:26.145610   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:48:26.226737   17401 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.838961385s)
	I1009 18:48:26.226765   17401 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 18:48:26.330221   17401 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:48:26.330240   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:48:26.452987   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:48:26.453015   17401 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:48:26.519131   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:26.580284   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:48:26.718762   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:48:26.718783   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:48:26.739031   17401 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-421083" context rescaled to 1 replicas
	I1009 18:48:27.035235   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:48:27.035257   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:48:27.327859   17401 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:48:27.327886   17401 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:48:27.700854   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:48:28.554767   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:28.623975   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.051801004s)
	I1009 18:48:28.624031   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:28.624043   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:28.624429   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:28.624458   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:28.624469   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:28.624477   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:28.624433   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:28.624743   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:28.624793   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:28.624802   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015110   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.333520889s)
	I1009 18:48:29.015154   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015166   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015202   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.317598584s)
	I1009 18:48:29.015244   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015259   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015267   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.315639322s)
	I1009 18:48:29.015289   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015296   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015633   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015640   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015657   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015658   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015642   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015666   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015664   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015673   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015675   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015682   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.015688   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.015878   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015903   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015910   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.015948   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.015972   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.015981   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.016044   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.016062   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.016078   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.016085   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.017435   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:29.017461   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.017478   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.093136   17401 pod_ready.go:93] pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:29.093159   17401 pod_ready.go:82] duration metric: took 4.582043559s for pod "coredns-7c65d6cfc9-7nvgj" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:29.093169   17401 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:29.205899   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:29.205916   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:29.206171   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:29.206219   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:29.206256   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:31.113763   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:31.128056   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:48:31.128090   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:31.131526   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.132070   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:31.132099   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.132301   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:31.132488   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:31.132642   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:31.132775   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:31.433634   17401 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:48:31.495545   17401 addons.go:234] Setting addon gcp-auth=true in "addons-421083"
	I1009 18:48:31.495634   17401 host.go:66] Checking if "addons-421083" exists ...
	I1009 18:48:31.496075   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:31.496124   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:31.511322   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43883
	I1009 18:48:31.511734   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:31.512242   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:31.512266   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:31.512597   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:31.513067   17401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 18:48:31.513091   17401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 18:48:31.527440   17401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1009 18:48:31.527916   17401 main.go:141] libmachine: () Calling .GetVersion
	I1009 18:48:31.528406   17401 main.go:141] libmachine: Using API Version  1
	I1009 18:48:31.528431   17401 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 18:48:31.528722   17401 main.go:141] libmachine: () Calling .GetMachineName
	I1009 18:48:31.528953   17401 main.go:141] libmachine: (addons-421083) Calling .GetState
	I1009 18:48:31.530508   17401 main.go:141] libmachine: (addons-421083) Calling .DriverName
	I1009 18:48:31.530711   17401 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:48:31.530735   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHHostname
	I1009 18:48:31.534086   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.534511   17401 main.go:141] libmachine: (addons-421083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:f5:45", ip: ""} in network mk-addons-421083: {Iface:virbr1 ExpiryTime:2024-10-09 19:47:52 +0000 UTC Type:0 Mac:52:54:00:90:f5:45 Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:addons-421083 Clientid:01:52:54:00:90:f5:45}
	I1009 18:48:31.534541   17401 main.go:141] libmachine: (addons-421083) DBG | domain addons-421083 has defined IP address 192.168.39.156 and MAC address 52:54:00:90:f5:45 in network mk-addons-421083
	I1009 18:48:31.534729   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHPort
	I1009 18:48:31.534890   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHKeyPath
	I1009 18:48:31.535076   17401 main.go:141] libmachine: (addons-421083) Calling .GetSSHUsername
	I1009 18:48:31.535258   17401 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/addons-421083/id_rsa Username:docker}
	I1009 18:48:32.025573   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.290977216s)
	I1009 18:48:32.025637   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025651   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025647   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.281132825s)
	I1009 18:48:32.025687   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025705   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025722   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.938101476s)
	I1009 18:48:32.025692   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.215959598s)
	I1009 18:48:32.025773   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025787   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025821   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.867163386s)
	I1009 18:48:32.025749   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025837   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025842   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.025853   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.025952   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.283123896s)
	I1009 18:48:32.025986   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.025999   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026008   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026015   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026022   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.174931301s)
	I1009 18:48:32.026040   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026050   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	W1009 18:48:32.025985   17401 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:48:32.026078   17401 retry.go:31] will retry after 291.827465ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:48:32.026153   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.44584113s)
	I1009 18:48:32.026169   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026178   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026184   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026187   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026194   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026195   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026199   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026156   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026214   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026223   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026232   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026239   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026200   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026272   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026282   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026289   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026202   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026313   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026401   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026423   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026431   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026438   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026439   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026443   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026465   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026471   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026478   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.026483   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.026861   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.026889   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.026895   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.026906   17401 addons.go:475] Verifying addon metrics-server=true in "addons-421083"
	I1009 18:48:32.028309   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.028335   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028342   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028350   17401 addons.go:475] Verifying addon ingress=true in "addons-421083"
	I1009 18:48:32.028585   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028593   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028645   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028655   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028664   17401 addons.go:475] Verifying addon registry=true in "addons-421083"
	I1009 18:48:32.028669   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.028677   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.028748   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:32.028768   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.030391   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.030411   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.030419   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.030630   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.030657   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.030895   17401 out.go:177] * Verifying ingress addon...
	I1009 18:48:32.031989   17401 out.go:177] * Verifying registry addon...
	I1009 18:48:32.031990   17401 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-421083 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:48:32.033679   17401 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:48:32.038106   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:48:32.164704   17401 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:48:32.164726   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:32.164948   17401 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:48:32.164967   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.237713   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:32.237740   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:32.238051   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:32.238070   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:32.318541   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:48:32.667558   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:32.668179   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.048787   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.049940   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.116710   17401 pod_ready.go:103] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:33.576325   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:33.576686   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:33.586812   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.885920713s)
	I1009 18:48:33.586864   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:33.586875   17401 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.056142547s)
	I1009 18:48:33.586882   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:33.587347   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:33.587380   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:33.587394   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:33.587400   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:33.587655   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:33.587694   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:33.587705   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:33.587715   17401 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-421083"
	I1009 18:48:33.589013   17401 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:48:33.590036   17401 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:48:33.591999   17401 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:48:33.592724   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:48:33.593794   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:48:33.593817   17401 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:48:33.640391   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:48:33.640418   17401 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:48:33.643274   17401 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:48:33.643292   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:33.712677   17401 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:48:33.712708   17401 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:48:33.798343   17401 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:48:34.037807   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.044499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.098941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:34.313456   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.994855339s)
	I1009 18:48:34.313519   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:34.313540   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:34.313812   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:34.313849   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:34.313868   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:34.313881   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:34.313892   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:34.314177   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:34.314188   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:34.546371   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:34.546782   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:34.601912   17401 pod_ready.go:98] pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.156 HostIPs:[{IP:192.168.39.156}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-09 18:48:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-09 18:48:28 +0000 UTC,FinishedAt:2024-10-09 18:48:33 +0000 UTC,ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868 Started:0xc001b13080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016828a0} {Name:kube-api-access-2lggz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016828b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1009 18:48:34.601936   17401 pod_ready.go:82] duration metric: took 5.508761994s for pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace to be "Ready" ...
	E1009 18:48:34.601946   17401 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-fvwmm" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-10-09 18:48:23 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.156 HostIPs:[{IP:192.168.39.156}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-10-09 18:48:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-10-09 18:48:28 +0000 UTC,FinishedAt:2024-10-09 18:48:33 +0000 UTC,ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://f5e123bf4f607312fa33f1a460d4653a659f49d9da94fd4c8208a1e961f47868 Started:0xc001b13080 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0016828a0} {Name:kube-api-access-2lggz MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0016828b0}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1009 18:48:34.601956   17401 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.608414   17401 pod_ready.go:93] pod "etcd-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.608444   17401 pod_ready.go:82] duration metric: took 6.476297ms for pod "etcd-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.608458   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.618045   17401 pod_ready.go:93] pod "kube-apiserver-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.618073   17401 pod_ready.go:82] duration metric: took 9.606049ms for pod "kube-apiserver-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.618085   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.624712   17401 pod_ready.go:93] pod "kube-controller-manager-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.624739   17401 pod_ready.go:82] duration metric: took 6.645765ms for pod "kube-controller-manager-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.624750   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98lbc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.630896   17401 pod_ready.go:93] pod "kube-proxy-98lbc" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:34.630920   17401 pod_ready.go:82] duration metric: took 6.162418ms for pod "kube-proxy-98lbc" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.630932   17401 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:34.646945   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.014805   17401 pod_ready.go:93] pod "kube-scheduler-addons-421083" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.014827   17401 pod_ready.go:82] duration metric: took 383.888267ms for pod "kube-scheduler-addons-421083" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.014836   17401 pod_ready.go:39] duration metric: took 10.514987687s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:35.014851   17401 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:48:35.014896   17401 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:48:35.049792   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.070107   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.131734   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.214085   17401 api_server.go:72] duration metric: took 11.229784943s to wait for apiserver process to appear ...
	I1009 18:48:35.214114   17401 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:48:35.214138   17401 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I1009 18:48:35.216474   17401 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.418096274s)
	I1009 18:48:35.216510   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:35.216522   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:35.216824   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:35.216867   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:35.216878   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:35.216890   17401 main.go:141] libmachine: Making call to close driver server
	I1009 18:48:35.216898   17401 main.go:141] libmachine: (addons-421083) Calling .Close
	I1009 18:48:35.217135   17401 main.go:141] libmachine: Successfully made call to close driver server
	I1009 18:48:35.217148   17401 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 18:48:35.217150   17401 main.go:141] libmachine: (addons-421083) DBG | Closing plugin on server side
	I1009 18:48:35.218148   17401 addons.go:475] Verifying addon gcp-auth=true in "addons-421083"
	I1009 18:48:35.219896   17401 out.go:177] * Verifying gcp-auth addon...
	I1009 18:48:35.222417   17401 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:48:35.237229   17401 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I1009 18:48:35.243892   17401 api_server.go:141] control plane version: v1.31.1
	I1009 18:48:35.243917   17401 api_server.go:131] duration metric: took 29.797275ms to wait for apiserver health ...
	I1009 18:48:35.243926   17401 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:48:35.264219   17401 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:48:35.264245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.286370   17401 system_pods.go:59] 18 kube-system pods found
	I1009 18:48:35.286409   17401 system_pods.go:61] "coredns-7c65d6cfc9-7nvgj" [b3ca0959-36fb-4d13-89c0-435f4fde16f8] Running
	I1009 18:48:35.286420   17401 system_pods.go:61] "coredns-7c65d6cfc9-fvwmm" [bad6872d-f55e-4622-b3ac-fb96784b9b65] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
	I1009 18:48:35.286430   17401 system_pods.go:61] "csi-hostpath-attacher-0" [e2b2f817-c253-49b6-8345-271857327ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:48:35.286438   17401 system_pods.go:61] "csi-hostpath-resizer-0" [ddf25048-aab5-4cbc-bfec-8219363e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:48:35.286449   17401 system_pods.go:61] "csi-hostpathplugin-m7lz5" [c05bb7d7-3592-48d1-85d1-b361a68e79aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:48:35.286455   17401 system_pods.go:61] "etcd-addons-421083" [d3c4522c-c7fb-4ad2-8000-383016f601e5] Running
	I1009 18:48:35.286460   17401 system_pods.go:61] "kube-apiserver-addons-421083" [6082264c-0805-4790-8796-9ce439e9b3b4] Running
	I1009 18:48:35.286466   17401 system_pods.go:61] "kube-controller-manager-addons-421083" [45ee9bad-9652-46b5-b70d-12cfd491365b] Running
	I1009 18:48:35.286479   17401 system_pods.go:61] "kube-ingress-dns-minikube" [1f1cb904-3c3e-4c50-b15f-385022869b8e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:48:35.286493   17401 system_pods.go:61] "kube-proxy-98lbc" [6a26ad94-5c33-40db-8a42-9e11d3523806] Running
	I1009 18:48:35.286502   17401 system_pods.go:61] "kube-scheduler-addons-421083" [81120780-6ded-4417-9df7-67be5fef6826] Running
	I1009 18:48:35.286510   17401 system_pods.go:61] "metrics-server-84c5f94fbc-4s5xq" [cd71806c-0308-466b-917f-085718fee448] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:48:35.286535   17401 system_pods.go:61] "nvidia-device-plugin-daemonset-4k6f6" [c45cd383-1866-4787-a24e-bac7c6eb0863] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:48:35.286547   17401 system_pods.go:61] "registry-66c9cd494c-f92jv" [98955600-7b10-44b3-ac78-eff396b2c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:48:35.286556   17401 system_pods.go:61] "registry-proxy-x986l" [f7e67133-eaf2-4276-8331-d8dd8cbf0c4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:48:35.286567   17401 system_pods.go:61] "snapshot-controller-56fcc65765-4dht5" [0953230e-3a9a-494e-97c6-faef913aa115] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.286577   17401 system_pods.go:61] "snapshot-controller-56fcc65765-lshkr" [735e0cc5-1a6f-41e8-adfa-beaaee6751d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.286582   17401 system_pods.go:61] "storage-provisioner" [c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba] Running
	I1009 18:48:35.286594   17401 system_pods.go:74] duration metric: took 42.661688ms to wait for pod list to return data ...
	I1009 18:48:35.286607   17401 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:48:35.403443   17401 default_sa.go:45] found service account: "default"
	I1009 18:48:35.403474   17401 default_sa.go:55] duration metric: took 116.856615ms for default service account to be created ...
	I1009 18:48:35.403496   17401 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:48:35.538414   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:35.541140   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:35.642478   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:35.645183   17401 system_pods.go:86] 17 kube-system pods found
	I1009 18:48:35.645214   17401 system_pods.go:89] "coredns-7c65d6cfc9-7nvgj" [b3ca0959-36fb-4d13-89c0-435f4fde16f8] Running
	I1009 18:48:35.645226   17401 system_pods.go:89] "csi-hostpath-attacher-0" [e2b2f817-c253-49b6-8345-271857327ef0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1009 18:48:35.645237   17401 system_pods.go:89] "csi-hostpath-resizer-0" [ddf25048-aab5-4cbc-bfec-8219363e5c69] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1009 18:48:35.645252   17401 system_pods.go:89] "csi-hostpathplugin-m7lz5" [c05bb7d7-3592-48d1-85d1-b361a68e79aa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1009 18:48:35.645258   17401 system_pods.go:89] "etcd-addons-421083" [d3c4522c-c7fb-4ad2-8000-383016f601e5] Running
	I1009 18:48:35.645265   17401 system_pods.go:89] "kube-apiserver-addons-421083" [6082264c-0805-4790-8796-9ce439e9b3b4] Running
	I1009 18:48:35.645272   17401 system_pods.go:89] "kube-controller-manager-addons-421083" [45ee9bad-9652-46b5-b70d-12cfd491365b] Running
	I1009 18:48:35.645280   17401 system_pods.go:89] "kube-ingress-dns-minikube" [1f1cb904-3c3e-4c50-b15f-385022869b8e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 18:48:35.645289   17401 system_pods.go:89] "kube-proxy-98lbc" [6a26ad94-5c33-40db-8a42-9e11d3523806] Running
	I1009 18:48:35.645296   17401 system_pods.go:89] "kube-scheduler-addons-421083" [81120780-6ded-4417-9df7-67be5fef6826] Running
	I1009 18:48:35.645307   17401 system_pods.go:89] "metrics-server-84c5f94fbc-4s5xq" [cd71806c-0308-466b-917f-085718fee448] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 18:48:35.645316   17401 system_pods.go:89] "nvidia-device-plugin-daemonset-4k6f6" [c45cd383-1866-4787-a24e-bac7c6eb0863] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1009 18:48:35.645327   17401 system_pods.go:89] "registry-66c9cd494c-f92jv" [98955600-7b10-44b3-ac78-eff396b2c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1009 18:48:35.645335   17401 system_pods.go:89] "registry-proxy-x986l" [f7e67133-eaf2-4276-8331-d8dd8cbf0c4a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1009 18:48:35.645345   17401 system_pods.go:89] "snapshot-controller-56fcc65765-4dht5" [0953230e-3a9a-494e-97c6-faef913aa115] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.645357   17401 system_pods.go:89] "snapshot-controller-56fcc65765-lshkr" [735e0cc5-1a6f-41e8-adfa-beaaee6751d3] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1009 18:48:35.645362   17401 system_pods.go:89] "storage-provisioner" [c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba] Running
	I1009 18:48:35.645377   17401 system_pods.go:126] duration metric: took 241.871798ms to wait for k8s-apps to be running ...
	I1009 18:48:35.645389   17401 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:48:35.645446   17401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:48:35.688742   17401 system_svc.go:56] duration metric: took 43.344542ms WaitForService to wait for kubelet
	I1009 18:48:35.688773   17401 kubeadm.go:582] duration metric: took 11.704478846s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:48:35.688790   17401 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:48:35.725862   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:35.797044   17401 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 18:48:35.797067   17401 node_conditions.go:123] node cpu capacity is 2
	I1009 18:48:35.797078   17401 node_conditions.go:105] duration metric: took 108.283571ms to run NodePressure ...
	I1009 18:48:35.797088   17401 start.go:241] waiting for startup goroutines ...
	I1009 18:48:36.038811   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.042255   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.140907   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.225891   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:36.538883   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:36.542927   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:36.598373   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:36.729417   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.038253   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.041966   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.098230   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.226290   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:37.538487   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:37.541789   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:37.598048   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:37.726481   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.038469   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.041485   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.097529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.225898   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:38.538763   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:38.541697   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:38.598097   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:38.726422   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.337135   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:39.337500   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.338067   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.339560   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.537928   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:39.542262   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:39.598564   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:39.726527   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.038353   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.041308   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.097650   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.225337   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:40.539195   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:40.542143   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:40.597921   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:40.725924   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.037667   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.041864   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.097346   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.225322   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:41.538959   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:41.542081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:41.597874   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:41.726272   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.038753   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.041623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.097323   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.225272   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:42.538484   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:42.541056   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:42.640421   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:42.725859   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.037803   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.041140   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.098983   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.225799   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:43.541529   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:43.543436   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:43.598156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:43.725688   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.038742   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.041150   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.097549   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.226769   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:44.538172   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:44.540777   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:44.598049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:44.726623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.038762   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.041603   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.097214   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.225398   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:45.539142   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:45.541703   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:45.597852   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:45.725572   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.039192   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.040922   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.097445   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.225818   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:46.537754   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:46.541770   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:46.597319   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:46.725759   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.039362   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.041249   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.096841   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.226206   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:47.538337   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:47.541511   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:47.597941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:47.726483   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.038503   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.041274   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.097338   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.225594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:48.538508   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:48.540981   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:48.597331   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:48.725468   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.038710   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.042163   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.097924   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.226245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:49.538988   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:49.541412   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:49.596888   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:49.726081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.038243   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.041195   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.096807   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.226394   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:50.539781   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:50.541529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:50.597529   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:50.725593   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.038645   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.041226   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.097467   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.226045   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:51.538939   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:51.541528   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:51.597520   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:51.726761   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.038739   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.041293   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.097555   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.226733   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:52.537867   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:52.540618   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:52.596960   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:52.726613   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.042273   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.042936   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.106610   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.230069   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:53.538016   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:53.541325   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:53.597065   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:53.725839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.039302   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.041204   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.097819   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.226623   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:54.539225   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:54.541388   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:54.597116   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:54.727827   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.037790   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.041871   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.097658   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.225684   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:55.538858   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:55.540591   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:55.597177   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:55.725299   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.038274   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.040941   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.097908   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.226257   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:56.538658   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:56.541533   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:56.597187   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:56.726312   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.037608   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.042062   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.097786   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.225923   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:57.537840   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:57.540791   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:57.597030   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:57.726663   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.038445   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.041005   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.097603   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.225893   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:58.672685   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:58.673025   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:58.673515   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:58.726061   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.038257   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.044075   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.097970   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.226072   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:59.537939   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:59.541434   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:59.596839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:59.727174   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.037844   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.041204   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.097883   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.227188   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:00.539325   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:00.540601   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:00.645122   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:00.726394   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.038807   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.042322   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.096810   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.226195   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:01.538222   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:01.541108   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:01.603587   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:01.726736   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.038468   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.041992   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.098096   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.226277   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:02.538050   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:02.541156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:02.597276   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:02.725792   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.038320   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.041308   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.097777   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.226962   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:03.538316   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:03.541384   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:03.597628   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:03.726083   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.038339   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.041045   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.097917   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.226725   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:04.538509   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:04.541594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:04.597247   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:04.725836   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.037583   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.041567   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.098678   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.225424   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:05.538384   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:05.541817   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:49:05.597839   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:05.726567   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.038319   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.041125   17401 kapi.go:107] duration metric: took 34.00301838s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:49:06.097245   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.226659   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:06.537835   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:06.598142   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:06.728451   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.037908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.097779   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:07.226748   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.760617   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:07.761555   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:07.762135   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.037684   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.097070   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.226609   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:08.537872   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:08.597484   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:08.726246   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.038347   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.097296   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.226643   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:09.538947   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:09.597750   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:09.726035   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.038995   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.098097   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.226857   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:10.538676   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:10.597792   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:10.728771   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.043083   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.096706   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.225783   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:11.537908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:11.597348   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:11.726594   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.039426   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.141349   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.228146   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:12.538188   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:12.598030   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:12.731081   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.037781   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.097507   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.226661   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:13.538384   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:13.596423   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:13.731049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.038432   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.097818   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.225544   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:14.539292   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:14.598499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:14.726155   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.039595   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.097356   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.226379   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:15.538357   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:15.597197   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:15.726266   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.038027   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.097695   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.226390   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:16.538688   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:16.597448   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:16.726849   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.038927   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.097951   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.225970   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:17.537619   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:17.597505   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:17.727059   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.037632   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.096974   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.226160   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:18.537798   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:18.596961   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:18.938328   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.161578   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.161871   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.225677   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:19.537344   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:19.596724   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:19.726000   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.037681   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.096937   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.226861   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:20.539847   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:20.598454   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:20.726816   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.037955   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.139202   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.226423   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:21.538122   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:21.597670   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:21.726032   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.038466   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.097214   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.226115   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:22.538162   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:22.598528   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:22.726622   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.038990   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.108179   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.232788   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:23.537921   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:23.597656   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:23.725545   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.038901   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.098156   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.227084   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:24.541227   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:24.597895   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:24.726196   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.038507   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.104071   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:25.226381   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:25.538966   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:25.596676   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:25.726064   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.037709   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.097304   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:26.226609   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:26.538618   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:26.596921   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:26.726894   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.039216   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.097596   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:27.226548   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:27.618277   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:27.620145   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:27.726374   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.039951   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.140902   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:28.240174   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:28.538840   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:28.598884   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:28.729716   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.039453   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.097689   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:29.225624   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:29.538773   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:29.596838   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:29.726082   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.044193   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.098448   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:30.226246   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:30.547999   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:30.650630   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:30.725209   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.046620   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.097600   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:31.226499   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:31.539195   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:31.604095   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:31.732460   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.038256   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.097825   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:32.226454   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:32.538908   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:32.597558   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:32.726426   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.038241   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.139546   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:33.225816   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:33.538377   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:33.598428   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:33.727003   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.043262   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:34.145522   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:34.242625   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:34.538961   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:34.599129   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:34.726935   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.037743   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:35.097363   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:35.225550   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:35.539226   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:35.597486   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:35.726019   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.037830   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:36.097303   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:36.230689   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:36.538495   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:36.598440   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:36.728285   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.040053   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:37.140046   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:37.227270   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:37.538243   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:37.597895   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:37.726362   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.038857   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:38.097551   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:38.227580   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:38.539134   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:38.640058   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:38.726213   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.038974   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:39.099422   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:39.227643   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:39.538762   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:39.597799   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:49:39.726722   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.039788   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:40.099827   17401 kapi.go:107] duration metric: took 1m6.507096683s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:49:40.227951   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:40.538483   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:40.725624   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.039719   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:41.227109   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:41.540186   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:41.726512   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.038597   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:42.227385   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:42.538617   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:42.725595   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.188196   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:43.229131   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:43.538175   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:43.725721   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.038313   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:44.226209   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:44.539141   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:44.727854   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.038316   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:45.227568   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:45.538315   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:45.725967   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.038053   17401 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:49:46.226688   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:46.539059   17401 kapi.go:107] duration metric: took 1m14.505379696s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:49:46.726049   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.226280   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:47.726031   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.230838   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:48.726211   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.226106   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:49.726519   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.226412   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:50.725990   17401 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:49:51.226985   17401 kapi.go:107] duration metric: took 1m16.004566477s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:49:51.228652   17401 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-421083 cluster.
	I1009 18:49:51.229917   17401 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:49:51.231288   17401 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:49:51.232694   17401 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1009 18:49:51.233922   17401 addons.go:510] duration metric: took 1m27.249612449s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server nvidia-device-plugin inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1009 18:49:51.233955   17401 start.go:246] waiting for cluster config update ...
	I1009 18:49:51.233974   17401 start.go:255] writing updated cluster config ...
	I1009 18:49:51.234198   17401 ssh_runner.go:195] Run: rm -f paused
	I1009 18:49:51.287148   17401 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:49:51.288903   17401 out.go:177] * Done! kubectl is now configured to use "addons-421083" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.220436867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a6662dc4-c747-422a-b844-6d7861efca91 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.223110009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ddff3c00-3b5e-4aa2-b220-73d2e40e7bfb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.224236119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500613224213519,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ddff3c00-3b5e-4aa2-b220-73d2e40e7bfb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.224963414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c293fb94-bcd8-4995-ac4d-e185644bf3cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.225084320Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c293fb94-bcd8-4995-ac4d-e185644bf3cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.225332518Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e475e65936c2881582d2015bb3a9d9b8fa80a2f7956a73e501da0dfa7d94e646,PodSandboxId:05faab4df8be000e0e38837ddcf58a15a6c7374623fcf186b1aa9e94d75b3b42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728500472955971044,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hcpz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 813d520c-a411-406a-8178-0933a95697c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744be9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNN
ING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c293fb94-bcd8-4995-ac4d-e185644bf3cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.262631541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=012ac457-68d5-4652-ae3a-2b240a6347f4 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.262700653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=012ac457-68d5-4652-ae3a-2b240a6347f4 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.263805003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=19a7e447-92f7-40d7-8a9c-e734c7337d4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.264964161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500613264941288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=19a7e447-92f7-40d7-8a9c-e734c7337d4e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.265429873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7610af83-b2bf-4509-b7eb-f9e687e9537b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.265503847Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7610af83-b2bf-4509-b7eb-f9e687e9537b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.265936015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e475e65936c2881582d2015bb3a9d9b8fa80a2f7956a73e501da0dfa7d94e646,PodSandboxId:05faab4df8be000e0e38837ddcf58a15a6c7374623fcf186b1aa9e94d75b3b42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728500472955971044,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hcpz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 813d520c-a411-406a-8178-0933a95697c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744be9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNN
ING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7610af83-b2bf-4509-b7eb-f9e687e9537b name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.282531940Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=5db61377-8f7d-492b-8826-201862afc1e8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.282769354Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:05faab4df8be000e0e38837ddcf58a15a6c7374623fcf186b1aa9e94d75b3b42,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-hcpz4,Uid:813d520c-a411-406a-8178-0933a95697c4,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728500470038138925,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hcpz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 813d520c-a411-406a-8178-0933a95697c4,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T19:01:09.727255285Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&PodSandboxMetadata{Name:nginx,Uid:6f15c371-9273-4816-8120-e41e8534ec18,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1728500328978212910,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T18:58:48.668558889Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&PodSandboxMetadata{Name:busybox,Uid:fc74ccb7-748c-4810-bb45-a1431c16ef61,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499792153983455,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T18:49:51.842861338Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e69577e6e19708564
f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-4s5xq,Uid:cd71806c-0308-466b-917f-085718fee448,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499710085731811,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T18:48:29.763512622Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744be9255,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499709679402232,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-09T18:48:29.057579753Z,kubernetes.io/config.source: api,},RuntimeHandler:,
},&PodSandbox{Id:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&PodSandboxMetadata{Name:kube-proxy-98lbc,Uid:6a26ad94-5c33-40db-8a42-9e11d3523806,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499704388964403,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T18:48:23.476237507Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-7nvgj,Uid:b3ca0959-36fb-4d13-89c0-435f4fde16f8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499704187264513,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.p
od.name: coredns-7c65d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T18:48:23.877490478Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-421083,Uid:d941bddbb1b10378e3b1cd421862df63,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499693490236666,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d941bddbb1b10378e3b1cd421862df63,kubernetes.io/config.seen: 2024-10-09T18:48:12.829970994Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-421083,Uid:c2064a611d2a50a10c09fdc428f7bfd5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499693488533410,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.156:8443,kubernetes.io/config.hash: c2064a611d2a50a10c09fdc428f7bfd5,kubernetes.io/config.seen: 2024-10-09T18:48:12.829973429Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&PodSandboxMetadata{Name:etcd-addons-421083,Uid:66b0568f1de11882ac5e8a4842c70622,Namespace:kube-system
,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499693486457484,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.156:2379,kubernetes.io/config.hash: 66b0568f1de11882ac5e8a4842c70622,kubernetes.io/config.seen: 2024-10-09T18:48:12.829972109Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-421083,Uid:4621d746d162406ef807bfee9831bda5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728499693470535327,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4621d746d162406ef807bfee9831bda5,kubernetes.io/config.seen: 2024-10-09T18:48:12.829967916Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=5db61377-8f7d-492b-8826-201862afc1e8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.283707095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=140037d8-4790-4c3b-997f-246a429d6ccf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.283777200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=140037d8-4790-4c3b-997f-246a429d6ccf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.283996418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e475e65936c2881582d2015bb3a9d9b8fa80a2f7956a73e501da0dfa7d94e646,PodSandboxId:05faab4df8be000e0e38837ddcf58a15a6c7374623fcf186b1aa9e94d75b3b42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728500472955971044,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hcpz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 813d520c-a411-406a-8178-0933a95697c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744be9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNN
ING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=140037d8-4790-4c3b-997f-246a429d6ccf name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.304017351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=853aa762-554d-4449-a351-910b63402447 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.304146476Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=853aa762-554d-4449-a351-910b63402447 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.305009247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdf4c651-d6a7-426b-bc9c-fc0c9b61ad91 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.306937778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500613306912870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdf4c651-d6a7-426b-bc9c-fc0c9b61ad91 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.307591764Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f31a24c1-c29a-457c-b6b5-3101174b99a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.307658610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f31a24c1-c29a-457c-b6b5-3101174b99a1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:03:33 addons-421083 crio[659]: time="2024-10-09 19:03:33.307893693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e475e65936c2881582d2015bb3a9d9b8fa80a2f7956a73e501da0dfa7d94e646,PodSandboxId:05faab4df8be000e0e38837ddcf58a15a6c7374623fcf186b1aa9e94d75b3b42,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728500472955971044,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-hcpz4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 813d520c-a411-406a-8178-0933a95697c4,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebcd866002d1f847e6e747abc203615a9cd4fc3501a88c8b196f115a18477d7,PodSandboxId:f3e61dc7e25e6eaf9779d3ca183a5eed9bc3f79d8b53ee97710739d22e3c2ac2,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728500433788762445,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fc74ccb7-748c-4810-bb45-a1431c16ef61,},Annotations:map[string]string{i
o.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e10b0fae17697e64fc5c769d89e5e479042703e49a0ff5c9323f60f03be88d5,PodSandboxId:5559e240f9840f86502add65a85c126be3df0fec41199f3d98ef423d18c1172f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728500332981793147,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f15c371-9273-4816-8120-e41e8534ec18,},Annotations:map[string]string{io.kubernetes.container.hash
: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:254a4567afd2d60140e6acc64ed7cc0ae417fce099f3b4fb426808ba4455381d,PodSandboxId:1e69577e6e19708564f9fc04ddfad8aaf96579877fe0d7d2a436a58081c8116a,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728499751957608174,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-4s5xq,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: cd71806c-0308-466b-917f-085718fee448,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2,PodSandboxId:0911a44088ff1d383089c9739850313a2b4753a96dd4c6709f3bd9f744be9255,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728499710899992381,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner
,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6d6fe55-28f1-4d11-98c0-a7f23c9e34ba,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3,PodSandboxId:06bcb6b7d13a04301ace583077ad795c17fee22cb6f127bb1caddc6acf917096,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728499707168164382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c6
5d6cfc9-7nvgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3ca0959-36fb-4d13-89c0-435f4fde16f8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9,PodSandboxId:0da18081b4724e8dafec43b7f989c58bdb1a96665a7e93b8d7217820910c89df,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d13
1805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728499704964242194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98lbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a26ad94-5c33-40db-8a42-9e11d3523806,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0,PodSandboxId:aca54aae7f3a7891706c5c0ab6751dc82a6f18ac0a122751fa16624587f7c966,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8
d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728499693684933672,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66b0568f1de11882ac5e8a4842c70622,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139,PodSandboxId:782b171e5be941317f9bbc56a980f7ca9e2e87120b855a31a18ac99ed406ad3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNI
NG,CreatedAt:1728499693700326668,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4621d746d162406ef807bfee9831bda5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b,PodSandboxId:069f31703e472090a1c8656b40ed3d06bfda466671d1a8b238d4d2b46f744520,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNN
ING,CreatedAt:1728499693678213932,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d941bddbb1b10378e3b1cd421862df63,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416,PodSandboxId:d3bfa5517f5dd4289f8615169a66d42db15931dc535dfb47d887e8f170abd956,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172
8499693631591613,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-421083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c2064a611d2a50a10c09fdc428f7bfd5,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f31a24c1-c29a-457c-b6b5-3101174b99a1 name=/runtime.v1.RuntimeService/ListContainers
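
	The repeated ListContainers/ListPodSandbox responses above are CRI-O's own debug logging, captured while this log bundle was collected. A minimal sketch for pulling the same CRI-O journal directly, assuming the addons-421083 VM is still up (only the binary path and profile name come from this report; the rest is an assumption, not part of the captured output):

	  # Hypothetical reproduction: tail CRI-O's systemd journal inside the minikube VM.
	  out/minikube-linux-amd64 ssh -p addons-421083 -- sudo journalctl -u crio --no-pager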
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e475e65936c28       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   05faab4df8be0       hello-world-app-55bf9c44b4-hcpz4
	4ebcd866002d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     2 minutes ago       Running             busybox                   0                   f3e61dc7e25e6       busybox
	7e10b0fae1769       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   5559e240f9840       nginx
	254a4567afd2d       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago      Running             metrics-server            0                   1e69577e6e197       metrics-server-84c5f94fbc-4s5xq
	9dc0d87ec4e28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   0911a44088ff1       storage-provisioner
	24e77cea269e4       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   06bcb6b7d13a0       coredns-7c65d6cfc9-7nvgj
	5eb98519fb296       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   0da18081b4724       kube-proxy-98lbc
	f6902ff7c3198       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   782b171e5be94       kube-controller-manager-addons-421083
	5752f9d7d67df       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   aca54aae7f3a7       etcd-addons-421083
	b631e95bf64ef       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   069f31703e472       kube-scheduler-addons-421083
	2e3aab9ef167b       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   d3bfa5517f5dd       kube-apiserver-addons-421083
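
	The table above is the CRI-level container listing gathered for this bundle. A rough equivalent, assuming the cluster is still running (the command itself is an assumption, not something the report ran):

	  # Hypothetical reproduction of the container status table: list all CRI-O containers on the node.
	  out/minikube-linux-amd64 ssh -p addons-421083 -- sudo crictl ps -a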
	
	
	==> coredns [24e77cea269e4acab7e68470fcf4b5042abed610603f013a17ad27b8498fe3c3] <==
	[INFO] 10.244.0.20:52196 - 36884 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069639s
	[INFO] 10.244.0.20:52196 - 29046 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061808s
	[INFO] 10.244.0.20:52196 - 7001 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000121277s
	[INFO] 10.244.0.20:33338 - 50589 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000063799s
	[INFO] 10.244.0.20:52196 - 25185 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000172178s
	[INFO] 10.244.0.20:33338 - 63485 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046112s
	[INFO] 10.244.0.20:33338 - 39599 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000126591s
	[INFO] 10.244.0.20:33338 - 65383 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068628s
	[INFO] 10.244.0.20:33338 - 18386 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000111461s
	[INFO] 10.244.0.20:33338 - 33017 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000063987s
	[INFO] 10.244.0.20:33338 - 6246 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000149736s
	[INFO] 10.244.0.20:59573 - 27592 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000098615s
	[INFO] 10.244.0.20:46587 - 41017 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000090471s
	[INFO] 10.244.0.20:59573 - 57930 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000218372s
	[INFO] 10.244.0.20:59573 - 32081 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093627s
	[INFO] 10.244.0.20:46587 - 21502 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000051599s
	[INFO] 10.244.0.20:59573 - 9738 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085348s
	[INFO] 10.244.0.20:46587 - 40935 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043547s
	[INFO] 10.244.0.20:59573 - 26166 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000147884s
	[INFO] 10.244.0.20:46587 - 17234 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000101867s
	[INFO] 10.244.0.20:46587 - 50811 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000257069s
	[INFO] 10.244.0.20:59573 - 6056 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000428645s
	[INFO] 10.244.0.20:46587 - 5607 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085877s
	[INFO] 10.244.0.20:59573 - 42368 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070998s
	[INFO] 10.244.0.20:46587 - 52012 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000152649s
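
	Each lookup above fans out through the cluster's DNS search domains, which is why the same query for hello-world-app.default.svc.cluster.local yields several NXDOMAIN answers before the final NOERROR. A hedged way to repeat one of these lookups, assuming the busybox test pod is still present and its image ships the nslookup applet:

	  # Re-run the resolution that CoreDNS logged, from inside the cluster (assumes the
	  # busybox pod created by this test still exists).
	  kubectl --context addons-421083 exec busybox -- nslookup hello-world-app.default.svc.cluster.local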
	
	
	==> describe nodes <==
	Name:               addons-421083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-421083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-421083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_48_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-421083
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:48:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-421083
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:03:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:01:23 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:01:23 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:01:23 +0000   Wed, 09 Oct 2024 18:48:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:01:23 +0000   Wed, 09 Oct 2024 18:48:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    addons-421083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 75e2a6cd148147469f518c75962f3bbf
	  System UUID:                75e2a6cd-1481-4746-9f51-8c75962f3bbf
	  Boot ID:                    0e0c5f47-c02c-48b8-acd3-0a67c93483b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-hcpz4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 coredns-7c65d6cfc9-7nvgj                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-421083                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-421083             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-421083    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-98lbc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-421083             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-4s5xq          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-421083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-421083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-421083 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node addons-421083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node addons-421083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node addons-421083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node addons-421083 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-421083 event: Registered Node addons-421083 in Controller
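
	The section above is the standard node description captured for addons-421083. Assuming the profile is still up, roughly the same output should be reproducible with (hypothetical, not part of the captured report):

	  # Regenerate the node description shown above.
	  kubectl --context addons-421083 describe node addons-421083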
	
	
	==> dmesg <==
	[  +5.846157] systemd-fstab-generator[1344]: Ignoring "noauto" option for root device
	[  +0.117755] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.009528] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.032849] kauditd_printk_skb: 157 callbacks suppressed
	[  +8.184113] kauditd_printk_skb: 36 callbacks suppressed
	[ +17.999127] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 9 18:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.669872] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.322968] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.434906] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.178339] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.167095] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.203594] kauditd_printk_skb: 4 callbacks suppressed
	[Oct 9 18:50] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 9 18:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.139384] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.651026] kauditd_printk_skb: 25 callbacks suppressed
	[  +5.156133] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.838672] kauditd_printk_skb: 42 callbacks suppressed
	[ +11.795204] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.658853] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.285162] kauditd_printk_skb: 27 callbacks suppressed
	[Oct 9 18:59] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 9 19:00] kauditd_printk_skb: 49 callbacks suppressed
	[Oct 9 19:01] kauditd_printk_skb: 17 callbacks suppressed
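
	The kauditd_printk_skb lines above only indicate that kernel audit records were rate-limited in dmesg; they are not errors on their own. A hedged way to pull the full kernel ring buffer from the VM (command is an assumption):

	  # Dump the kernel log inside the minikube VM.
	  out/minikube-linux-amd64 ssh -p addons-421083 -- dmesg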
	
	
	==> etcd [5752f9d7d67df9563a07a1052d2e1193e3d9c826fe338265f4b2057edc65a2b0] <==
	{"level":"info","ts":"2024-10-09T18:49:18.925130Z","caller":"traceutil/trace.go:171","msg":"trace[1822636989] transaction","detail":"{read_only:false; response_revision:968; number_of_response:1; }","duration":"222.835999ms","start":"2024-10-09T18:49:18.702277Z","end":"2024-10-09T18:49:18.925113Z","steps":["trace[1822636989] 'process raft request'  (duration: 222.441235ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:19.145947Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.535302ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:19.146012Z","caller":"traceutil/trace.go:171","msg":"trace[1382877953] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:968; }","duration":"120.619504ms","start":"2024-10-09T18:49:19.025382Z","end":"2024-10-09T18:49:19.146002Z","steps":["trace[1382877953] 'range keys from in-memory index tree'  (duration: 120.434877ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:19.146170Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.775854ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:19.146201Z","caller":"traceutil/trace.go:171","msg":"trace[1721120419] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:968; }","duration":"205.815551ms","start":"2024-10-09T18:49:18.940379Z","end":"2024-10-09T18:49:19.146194Z","steps":["trace[1721120419] 'range keys from in-memory index tree'  (duration: 205.725803ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:49:27.595961Z","caller":"traceutil/trace.go:171","msg":"trace[1766742759] transaction","detail":"{read_only:false; response_revision:1019; number_of_response:1; }","duration":"150.280237ms","start":"2024-10-09T18:49:27.445646Z","end":"2024-10-09T18:49:27.595926Z","steps":["trace[1766742759] 'process raft request'  (duration: 145.972212ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:49:43.170696Z","caller":"traceutil/trace.go:171","msg":"trace[1012372526] linearizableReadLoop","detail":"{readStateIndex:1148; appliedIndex:1147; }","duration":"256.466952ms","start":"2024-10-09T18:49:42.914206Z","end":"2024-10-09T18:49:43.170673Z","steps":["trace[1012372526] 'read index received'  (duration: 252.704542ms)","trace[1012372526] 'applied index is now lower than readState.Index'  (duration: 3.761549ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:49:43.170894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.657701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-09T18:49:43.170932Z","caller":"traceutil/trace.go:171","msg":"trace[122982882] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1114; }","duration":"256.722135ms","start":"2024-10-09T18:49:42.914202Z","end":"2024-10-09T18:49:43.170925Z","steps":["trace[122982882] 'agreement among raft nodes before linearized reading'  (duration: 256.597377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:49:43.171205Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.810461ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:49:43.171247Z","caller":"traceutil/trace.go:171","msg":"trace[989403056] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1114; }","duration":"145.862938ms","start":"2024-10-09T18:49:43.025377Z","end":"2024-10-09T18:49:43.171240Z","steps":["trace[989403056] 'agreement among raft nodes before linearized reading'  (duration: 145.791706ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:58:12.978697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.231892ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:58:12.978906Z","caller":"traceutil/trace.go:171","msg":"trace[237117682] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:2027; }","duration":"112.555179ms","start":"2024-10-09T18:58:12.866342Z","end":"2024-10-09T18:58:12.978897Z","steps":["trace[237117682] 'agreement among raft nodes before linearized reading'  (duration: 112.20562ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:58:12.978382Z","caller":"traceutil/trace.go:171","msg":"trace[1973752009] linearizableReadLoop","detail":"{readStateIndex:2172; appliedIndex:2171; }","duration":"112.012437ms","start":"2024-10-09T18:58:12.866345Z","end":"2024-10-09T18:58:12.978358Z","steps":["trace[1973752009] 'read index received'  (duration: 107.540151ms)","trace[1973752009] 'applied index is now lower than readState.Index'  (duration: 4.471457ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:58:14.526736Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1502}
	{"level":"info","ts":"2024-10-09T18:58:14.615134Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1502,"took":"87.786338ms","hash":1603094992,"current-db-size-bytes":6234112,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":3518464,"current-db-size-in-use":"3.5 MB"}
	{"level":"info","ts":"2024-10-09T18:58:14.615185Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1603094992,"revision":1502,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T18:58:41.047459Z","caller":"traceutil/trace.go:171","msg":"trace[1415707454] linearizableReadLoop","detail":"{readStateIndex:2391; appliedIndex:2390; }","duration":"105.614078ms","start":"2024-10-09T18:58:40.941816Z","end":"2024-10-09T18:58:41.047431Z","steps":["trace[1415707454] 'read index received'  (duration: 105.391166ms)","trace[1415707454] 'applied index is now lower than readState.Index'  (duration: 222.452µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:58:41.047619Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.773802ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:58:41.047641Z","caller":"traceutil/trace.go:171","msg":"trace[918423557] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2232; }","duration":"105.823834ms","start":"2024-10-09T18:58:40.941811Z","end":"2024-10-09T18:58:41.047635Z","steps":["trace[918423557] 'agreement among raft nodes before linearized reading'  (duration: 105.758777ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:58:41.047737Z","caller":"traceutil/trace.go:171","msg":"trace[1215509480] transaction","detail":"{read_only:false; response_revision:2232; number_of_response:1; }","duration":"367.962133ms","start":"2024-10-09T18:58:40.679756Z","end":"2024-10-09T18:58:41.047719Z","steps":["trace[1215509480] 'process raft request'  (duration: 367.562227ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:58:41.047846Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T18:58:40.679739Z","time spent":"368.031201ms","remote":"127.0.0.1:45202","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:2198 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-10-09T19:03:14.535286Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2055}
	{"level":"info","ts":"2024-10-09T19:03:14.557806Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2055,"took":"21.836028ms","hash":2583765682,"current-db-size-bytes":6234112,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4452352,"current-db-size-in-use":"4.5 MB"}
	{"level":"info","ts":"2024-10-09T19:03:14.557870Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2583765682,"revision":2055,"compact-revision":1502}
	
	
	==> kernel <==
	 19:03:33 up 15 min,  0 users,  load average: 0.08, 0.26, 0.29
	Linux addons-421083 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2e3aab9ef167bb37ec2b65ba9ee7323586496af8d304e6c2e546d54b8bfbe416] <==
	E1009 18:50:20.122416       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.189.144:443: connect: connection refused" logger="UnhandledError"
	E1009 18:50:20.128396       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.105.189.144:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.105.189.144:443: connect: connection refused" logger="UnhandledError"
	I1009 18:50:20.187912       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1009 18:58:03.776001       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.91.111"}
	I1009 18:58:31.130124       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 18:58:32.186435       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E1009 18:58:34.589694       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 18:58:48.521286       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:58:48.710844       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.244.253"}
	I1009 18:58:49.731782       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:59:21.986735       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:21.992546       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.012155       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.012246       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.021961       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.022021       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.126128       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.126445       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:59:22.138603       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:59:22.138679       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:59:23.128402       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:59:23.139670       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:59:23.140520       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1009 19:01:09.869251       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.105.90"}
	E1009 19:01:14.558257       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [f6902ff7c31989979513f93e0a1f83cd6f56388007f443860c3784d8d7e7a139] <==
	E1009 19:01:33.899450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:34.169131       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:34.169233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:43.122103       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:43.122224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:46.108365       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:46.108427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:08.346199       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:08.346309       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:10.933955       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:10.934012       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:31.166591       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:31.166804       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:35.809418       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:35.809551       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:49.845186       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:49.845249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:55.009345       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:55.009411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:26.161372       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:26.161546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:27.126699       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:27.126809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:03:30.327486       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:03:30.327534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [5eb98519fb296c0a802a2e6e3efa36c196bbf5d5e9f4f54c0ecf2b98d9fdf1c9] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 18:48:25.914156       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 18:48:25.933709       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.156"]
	E1009 18:48:25.933781       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:48:26.021158       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 18:48:26.021221       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 18:48:26.021249       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:48:26.028960       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:48:26.029265       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:48:26.029279       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:48:26.038551       1 config.go:199] "Starting service config controller"
	I1009 18:48:26.038576       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:48:26.038620       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:48:26.038625       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:48:26.039111       1 config.go:328] "Starting node config controller"
	I1009 18:48:26.039119       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:48:26.139252       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:48:26.139279       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:48:26.139303       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b631e95bf64ef673c62c8365fb17901cc9d3dc8731b798ba14c26c7e155d2d4b] <==
	W1009 18:48:16.081195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:16.081445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:16.943253       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:16.943373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.028396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:48:17.028532       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.065115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 18:48:17.065154       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.166188       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:17.166243       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.171672       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:48:17.171714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.200481       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 18:48:17.200531       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.211812       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:48:17.212112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.223949       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:48:17.223987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.286257       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 18:48:17.286349       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 18:48:17.306236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 18:48:17.306367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:48:17.312318       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 18:48:17.312424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1009 18:48:19.268151       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:02:18 addons-421083 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:02:18 addons-421083 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:02:18 addons-421083 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:02:18 addons-421083 kubelet[1205]: E1009 19:02:18.978182    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500538977849959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:18 addons-421083 kubelet[1205]: E1009 19:02:18.978311    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500538977849959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:28 addons-421083 kubelet[1205]: E1009 19:02:28.982726    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500548981998154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:28 addons-421083 kubelet[1205]: E1009 19:02:28.982826    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500548981998154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:38 addons-421083 kubelet[1205]: E1009 19:02:38.986344    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500558985948490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:38 addons-421083 kubelet[1205]: E1009 19:02:38.986707    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500558985948490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:48 addons-421083 kubelet[1205]: E1009 19:02:48.990834    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500568990426102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:48 addons-421083 kubelet[1205]: E1009 19:02:48.990878    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500568990426102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:58 addons-421083 kubelet[1205]: E1009 19:02:58.995600    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500578995112688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:58 addons-421083 kubelet[1205]: E1009 19:02:58.995686    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500578995112688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:08 addons-421083 kubelet[1205]: E1009 19:03:08.998297    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500588997953959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:08 addons-421083 kubelet[1205]: E1009 19:03:08.998338    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500588997953959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:10 addons-421083 kubelet[1205]: I1009 19:03:10.514863    1205 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:03:18 addons-421083 kubelet[1205]: E1009 19:03:18.530181    1205 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 19:03:18 addons-421083 kubelet[1205]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:03:18 addons-421083 kubelet[1205]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:03:18 addons-421083 kubelet[1205]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:03:18 addons-421083 kubelet[1205]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:03:19 addons-421083 kubelet[1205]: E1009 19:03:19.000370    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500598999880804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:19 addons-421083 kubelet[1205]: E1009 19:03:19.000491    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500598999880804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:29 addons-421083 kubelet[1205]: E1009 19:03:29.003064    1205 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500609002556734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:03:29 addons-421083 kubelet[1205]: E1009 19:03:29.003335    1205 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500609002556734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:582800,},InodesUsed:&UInt64Value{Value:202,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [9dc0d87ec4e2897409b22b3479550f193e34db3d10f198487728c8d812418ac2] <==
	I1009 18:48:31.827201       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:48:31.932781       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:48:31.932854       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:48:32.203526       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:48:32.208253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d!
	I1009 18:48:32.219726       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c7228b5-eb03-4914-bf6e-0a6716f3b445", APIVersion:"v1", ResourceVersion:"728", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d became leader
	I1009 18:48:32.412488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-421083_0295e05b-8930-4ad1-a906-0a4a85bb781d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-421083 -n addons-421083
helpers_test.go:261: (dbg) Run:  kubectl --context addons-421083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (320.51s)
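
The kube-apiserver log above shows the aggregated metrics API repeatedly refusing connections ("v1beta1.metrics.k8s.io failed with: ... dial tcp 10.105.189.144:443: connect: connection refused"), which lines up with this MetricsServer timeout. A minimal diagnostic sketch, not part of the test run: it assumes the addon's usual layout (the APIService name v1beta1.metrics.k8s.io is taken from the log, but the metrics-server deployment/endpoints names in kube-system are assumptions).

	# Check whether the aggregated API ever reported Available, and why not.
	kubectl --context addons-421083 get apiservice v1beta1.metrics.k8s.io -o wide

	# Inspect the backing endpoints and recent pod logs for startup errors
	# (object names here are assumed, not taken from this report).
	kubectl --context addons-421083 -n kube-system get endpoints metrics-server
	kubectl --context addons-421083 -n kube-system logs deployment/metrics-server --tail=50

	# Query the API directly; a healthy metrics-server returns node usage here.
	kubectl --context addons-421083 top nodes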

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.25s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-421083
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-421083: exit status 82 (2m0.454626454s)

                                                
                                                
-- stdout --
	* Stopping node "addons-421083"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-421083" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-421083
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-421083: exit status 11 (21.50292439s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-421083" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-421083
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-421083: exit status 11 (6.142796853s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-421083" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-421083
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-421083: exit status 11 (6.144550725s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.156:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-421083" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.25s)
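
The root failure here is GUEST_STOP_TIMEOUT: the KVM guest never left the "Running" state within the 2-minute stop budget, and every follow-up addons call then failed over SSH with "no route to host". A hedged sketch of how one might inspect and force down the guest from the host, assuming the libvirt domain carries the profile name addons-421083 and the default qemu:///system connection (both assumptions, not confirmed by this report):

	# List libvirt domains and their states as seen by the KVM driver.
	virsh --connect qemu:///system list --all

	# Show the stuck guest's state and the reason it is in that state (domain name assumed).
	virsh --connect qemu:///system domstate addons-421083 --reason

	# Hard power-off the guest if a graceful ACPI shutdown never completes.
	virsh --connect qemu:///system destroy addons-421083

	# Collect the logs the error box asks for before filing an issue.
	out/minikube-linux-amd64 -p addons-421083 logs --file=logs.txt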

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 node stop m02 -v=7 --alsologtostderr
E1009 19:15:32.590470   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:16:13.551915   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-199780 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.453555771s)

                                                
                                                
-- stdout --
	* Stopping node "ha-199780-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:15:19.695980   32734 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:15:19.696109   32734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:15:19.696118   32734 out.go:358] Setting ErrFile to fd 2...
	I1009 19:15:19.696122   32734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:15:19.696285   32734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:15:19.696532   32734 mustload.go:65] Loading cluster: ha-199780
	I1009 19:15:19.696897   32734 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:15:19.696913   32734 stop.go:39] StopHost: ha-199780-m02
	I1009 19:15:19.697247   32734 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:15:19.697290   32734 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:15:19.712749   32734 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I1009 19:15:19.713184   32734 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:15:19.713693   32734 main.go:141] libmachine: Using API Version  1
	I1009 19:15:19.713716   32734 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:15:19.714093   32734 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:15:19.716629   32734 out.go:177] * Stopping node "ha-199780-m02"  ...
	I1009 19:15:19.717893   32734 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 19:15:19.717935   32734 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:15:19.718191   32734 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 19:15:19.718234   32734 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:15:19.721261   32734 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:15:19.721653   32734 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:15:19.721694   32734 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:15:19.721810   32734 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:15:19.721956   32734 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:15:19.722089   32734 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:15:19.722211   32734 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:15:19.802741   32734 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 19:15:19.856771   32734 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 19:15:19.912237   32734 main.go:141] libmachine: Stopping "ha-199780-m02"...
	I1009 19:15:19.912266   32734 main.go:141] libmachine: (ha-199780-m02) Calling .GetState
	I1009 19:15:19.913860   32734 main.go:141] libmachine: (ha-199780-m02) Calling .Stop
	I1009 19:15:19.917328   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 0/120
	I1009 19:15:20.919472   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 1/120
	I1009 19:15:21.921704   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 2/120
	I1009 19:15:22.923163   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 3/120
	I1009 19:15:23.924509   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 4/120
	I1009 19:15:24.926196   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 5/120
	I1009 19:15:25.927574   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 6/120
	I1009 19:15:26.929856   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 7/120
	I1009 19:15:27.931097   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 8/120
	I1009 19:15:28.932509   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 9/120
	I1009 19:15:29.934800   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 10/120
	I1009 19:15:30.936148   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 11/120
	I1009 19:15:31.937524   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 12/120
	I1009 19:15:32.939120   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 13/120
	I1009 19:15:33.940672   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 14/120
	I1009 19:15:34.942503   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 15/120
	I1009 19:15:35.943718   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 16/120
	I1009 19:15:36.945451   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 17/120
	I1009 19:15:37.946699   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 18/120
	I1009 19:15:38.948079   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 19/120
	I1009 19:15:39.949902   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 20/120
	I1009 19:15:40.951012   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 21/120
	I1009 19:15:41.952276   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 22/120
	I1009 19:15:42.953771   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 23/120
	I1009 19:15:43.955084   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 24/120
	I1009 19:15:44.957127   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 25/120
	I1009 19:15:45.958267   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 26/120
	I1009 19:15:46.959630   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 27/120
	I1009 19:15:47.960923   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 28/120
	I1009 19:15:48.962579   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 29/120
	I1009 19:15:49.964688   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 30/120
	I1009 19:15:50.966140   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 31/120
	I1009 19:15:51.967428   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 32/120
	I1009 19:15:52.969556   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 33/120
	I1009 19:15:53.970999   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 34/120
	I1009 19:15:54.972841   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 35/120
	I1009 19:15:55.974266   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 36/120
	I1009 19:15:56.975923   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 37/120
	I1009 19:15:57.977159   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 38/120
	I1009 19:15:58.979176   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 39/120
	I1009 19:15:59.981196   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 40/120
	I1009 19:16:00.982581   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 41/120
	I1009 19:16:01.984482   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 42/120
	I1009 19:16:02.985795   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 43/120
	I1009 19:16:03.987215   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 44/120
	I1009 19:16:04.989025   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 45/120
	I1009 19:16:05.990689   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 46/120
	I1009 19:16:06.991959   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 47/120
	I1009 19:16:07.993217   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 48/120
	I1009 19:16:08.994499   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 49/120
	I1009 19:16:09.996029   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 50/120
	I1009 19:16:10.997479   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 51/120
	I1009 19:16:11.998765   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 52/120
	I1009 19:16:13.000168   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 53/120
	I1009 19:16:14.001353   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 54/120
	I1009 19:16:15.003336   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 55/120
	I1009 19:16:16.005577   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 56/120
	I1009 19:16:17.006957   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 57/120
	I1009 19:16:18.008325   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 58/120
	I1009 19:16:19.009722   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 59/120
	I1009 19:16:20.011874   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 60/120
	I1009 19:16:21.013197   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 61/120
	I1009 19:16:22.014728   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 62/120
	I1009 19:16:23.015855   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 63/120
	I1009 19:16:24.017083   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 64/120
	I1009 19:16:25.019129   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 65/120
	I1009 19:16:26.020883   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 66/120
	I1009 19:16:27.022095   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 67/120
	I1009 19:16:28.023560   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 68/120
	I1009 19:16:29.025744   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 69/120
	I1009 19:16:30.027691   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 70/120
	I1009 19:16:31.029509   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 71/120
	I1009 19:16:32.030691   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 72/120
	I1009 19:16:33.032084   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 73/120
	I1009 19:16:34.033412   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 74/120
	I1009 19:16:35.035328   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 75/120
	I1009 19:16:36.036563   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 76/120
	I1009 19:16:37.037791   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 77/120
	I1009 19:16:38.039128   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 78/120
	I1009 19:16:39.041264   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 79/120
	I1009 19:16:40.043028   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 80/120
	I1009 19:16:41.044554   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 81/120
	I1009 19:16:42.045955   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 82/120
	I1009 19:16:43.047279   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 83/120
	I1009 19:16:44.049477   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 84/120
	I1009 19:16:45.051498   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 85/120
	I1009 19:16:46.053480   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 86/120
	I1009 19:16:47.054721   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 87/120
	I1009 19:16:48.056294   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 88/120
	I1009 19:16:49.058322   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 89/120
	I1009 19:16:50.060226   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 90/120
	I1009 19:16:51.061399   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 91/120
	I1009 19:16:52.062849   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 92/120
	I1009 19:16:53.064071   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 93/120
	I1009 19:16:54.065383   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 94/120
	I1009 19:16:55.067454   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 95/120
	I1009 19:16:56.069368   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 96/120
	I1009 19:16:57.070670   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 97/120
	I1009 19:16:58.072005   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 98/120
	I1009 19:16:59.073423   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 99/120
	I1009 19:17:00.075434   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 100/120
	I1009 19:17:01.076696   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 101/120
	I1009 19:17:02.078055   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 102/120
	I1009 19:17:03.079335   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 103/120
	I1009 19:17:04.081794   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 104/120
	I1009 19:17:05.083418   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 105/120
	I1009 19:17:06.085646   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 106/120
	I1009 19:17:07.087105   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 107/120
	I1009 19:17:08.088399   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 108/120
	I1009 19:17:09.089506   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 109/120
	I1009 19:17:10.091371   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 110/120
	I1009 19:17:11.093503   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 111/120
	I1009 19:17:12.094720   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 112/120
	I1009 19:17:13.096028   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 113/120
	I1009 19:17:14.097216   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 114/120
	I1009 19:17:15.098993   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 115/120
	I1009 19:17:16.100300   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 116/120
	I1009 19:17:17.101380   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 117/120
	I1009 19:17:18.102595   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 118/120
	I1009 19:17:19.104827   32734 main.go:141] libmachine: (ha-199780-m02) Waiting for machine to stop 119/120
	I1009 19:17:20.105665   32734 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 19:17:20.105792   32734 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-199780 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
E1009 19:17:35.476312   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr: (18.793421246s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
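
For context on the failure above: the kvm2 driver's stop path polls the VM state roughly once per second and gives up after 120 attempts, which is exactly the two-minute "Waiting for machine to stop N/120" run captured earlier in this log before the "unable to stop vm, current state \"Running\"" error. Below is a minimal sketch of that polling pattern only; it is not the actual minikube/libmachine code, and vmRunning/stopWithTimeout are hypothetical names used for illustration.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // vmRunning is a hypothetical stand-in for a libvirt state query; it always
    // reports "running" here to simulate a guest that never reaches Stopped.
    func vmRunning() bool { return true }

    // stopWithTimeout mirrors the cadence in the log: poll the VM state once per
    // second for up to `attempts` iterations, then return a "Temporary Error".
    func stopWithTimeout(attempts int) error {
        for i := 0; i < attempts; i++ {
            if !vmRunning() {
                return nil // machine reached the Stopped state
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            time.Sleep(1 * time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // The driver uses 120 attempts (~2 minutes); a smaller number keeps the demo short.
        if err := stopWithTimeout(5); err != nil {
            fmt.Println("Failed to stop node: Temporary Error: stop:", err)
        }
    }
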
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (1.353533447s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m03_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:10:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:10:42.430511   28654 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:42.430648   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430657   28654 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:42.430662   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430823   28654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:42.431377   28654 out.go:352] Setting JSON to false
	I1009 19:10:42.432258   28654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1728497859,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:42.432357   28654 start.go:139] virtualization: kvm guest
	I1009 19:10:42.434444   28654 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:42.435720   28654 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:42.435744   28654 notify.go:220] Checking for updates...
	I1009 19:10:42.438470   28654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:42.439771   28654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:42.441201   28654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.442550   28654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:42.443839   28654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:42.445321   28654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:42.478513   28654 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 19:10:42.479828   28654 start.go:297] selected driver: kvm2
	I1009 19:10:42.479841   28654 start.go:901] validating driver "kvm2" against <nil>
	I1009 19:10:42.479851   28654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:42.480537   28654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.480609   28654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:10:42.494762   28654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:10:42.494798   28654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 19:10:42.495015   28654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:42.495042   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:10:42.495103   28654 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:10:42.495115   28654 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:10:42.495160   28654 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:42.495268   28654 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.497127   28654 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:10:42.498350   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:10:42.498375   28654 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:10:42.498383   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:10:42.498461   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:10:42.498474   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:10:42.498736   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:10:42.498755   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json: {Name:mkaa9f981fdc58b4cf67de89e14727a24139b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:42.498888   28654 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:10:42.498923   28654 start.go:364] duration metric: took 18.652µs to acquireMachinesLock for "ha-199780"
	I1009 19:10:42.498944   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:10:42.499008   28654 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 19:10:42.500613   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:10:42.500730   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:42.500770   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:42.514603   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I1009 19:10:42.515116   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:42.515617   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:10:42.515660   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:42.515950   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:42.516152   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:10:42.516283   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:10:42.516418   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:10:42.516447   28654 client.go:168] LocalClient.Create starting
	I1009 19:10:42.516482   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:10:42.516515   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516531   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516577   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:10:42.516599   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516612   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516640   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:10:42.516651   28654 main.go:141] libmachine: (ha-199780) Calling .PreCreateCheck
	I1009 19:10:42.516980   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:10:42.517335   28654 main.go:141] libmachine: Creating machine...
	I1009 19:10:42.517347   28654 main.go:141] libmachine: (ha-199780) Calling .Create
	I1009 19:10:42.517467   28654 main.go:141] libmachine: (ha-199780) Creating KVM machine...
	I1009 19:10:42.518611   28654 main.go:141] libmachine: (ha-199780) DBG | found existing default KVM network
	I1009 19:10:42.519307   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.519165   28677 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1009 19:10:42.519338   28654 main.go:141] libmachine: (ha-199780) DBG | created network xml: 
	I1009 19:10:42.519353   28654 main.go:141] libmachine: (ha-199780) DBG | <network>
	I1009 19:10:42.519365   28654 main.go:141] libmachine: (ha-199780) DBG |   <name>mk-ha-199780</name>
	I1009 19:10:42.519373   28654 main.go:141] libmachine: (ha-199780) DBG |   <dns enable='no'/>
	I1009 19:10:42.519380   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519389   28654 main.go:141] libmachine: (ha-199780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 19:10:42.519398   28654 main.go:141] libmachine: (ha-199780) DBG |     <dhcp>
	I1009 19:10:42.519408   28654 main.go:141] libmachine: (ha-199780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 19:10:42.519416   28654 main.go:141] libmachine: (ha-199780) DBG |     </dhcp>
	I1009 19:10:42.519425   28654 main.go:141] libmachine: (ha-199780) DBG |   </ip>
	I1009 19:10:42.519432   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519439   28654 main.go:141] libmachine: (ha-199780) DBG | </network>
	I1009 19:10:42.519448   28654 main.go:141] libmachine: (ha-199780) DBG | 
	I1009 19:10:42.523998   28654 main.go:141] libmachine: (ha-199780) DBG | trying to create private KVM network mk-ha-199780 192.168.39.0/24...
	I1009 19:10:42.584957   28654 main.go:141] libmachine: (ha-199780) DBG | private KVM network mk-ha-199780 192.168.39.0/24 created
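
The network XML printed above is what gets handed to libvirt to create the isolated mk-ha-199780 network. Below is a rough sketch of the same step using the libvirt-go bindings (assumed API: NetworkDefineXML followed by Create); the real kvm2 driver lives in a separate plugin binary (docker-machine-driver-kvm2), so treat this only as an approximation of the calls involved, not the driver's actual code.

    package main

    import (
        "log"

        libvirt "github.com/libvirt/libvirt-go"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // XML equivalent to the block printed in the log above.
        networkXML := `<network>
      <name>mk-ha-199780</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

        // Define the persistent network, then start it.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()
        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("private KVM network mk-ha-199780 created")
    }
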
	I1009 19:10:42.584984   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.584941   28677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.584995   28654 main.go:141] libmachine: (ha-199780) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:42.585010   28654 main.go:141] libmachine: (ha-199780) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:10:42.585155   28654 main.go:141] libmachine: (ha-199780) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:10:42.845983   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.845854   28677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa...
	I1009 19:10:43.100187   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100062   28677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk...
	I1009 19:10:43.100216   28654 main.go:141] libmachine: (ha-199780) DBG | Writing magic tar header
	I1009 19:10:43.100229   28654 main.go:141] libmachine: (ha-199780) DBG | Writing SSH key tar header
	I1009 19:10:43.100242   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100204   28677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:43.100332   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780
	I1009 19:10:43.100355   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 (perms=drwx------)
	I1009 19:10:43.100365   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:10:43.100376   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:10:43.100386   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:43.100399   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:10:43.100406   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:10:43.100424   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:10:43.100435   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home
	I1009 19:10:43.100443   28654 main.go:141] libmachine: (ha-199780) DBG | Skipping /home - not owner
	I1009 19:10:43.100455   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:10:43.100467   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:10:43.100476   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:10:43.100483   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:10:43.100487   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:43.101601   28654 main.go:141] libmachine: (ha-199780) define libvirt domain using xml: 
	I1009 19:10:43.101609   28654 main.go:141] libmachine: (ha-199780) <domain type='kvm'>
	I1009 19:10:43.101614   28654 main.go:141] libmachine: (ha-199780)   <name>ha-199780</name>
	I1009 19:10:43.101624   28654 main.go:141] libmachine: (ha-199780)   <memory unit='MiB'>2200</memory>
	I1009 19:10:43.101632   28654 main.go:141] libmachine: (ha-199780)   <vcpu>2</vcpu>
	I1009 19:10:43.101638   28654 main.go:141] libmachine: (ha-199780)   <features>
	I1009 19:10:43.101646   28654 main.go:141] libmachine: (ha-199780)     <acpi/>
	I1009 19:10:43.101656   28654 main.go:141] libmachine: (ha-199780)     <apic/>
	I1009 19:10:43.101664   28654 main.go:141] libmachine: (ha-199780)     <pae/>
	I1009 19:10:43.101673   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.101686   28654 main.go:141] libmachine: (ha-199780)   </features>
	I1009 19:10:43.101695   28654 main.go:141] libmachine: (ha-199780)   <cpu mode='host-passthrough'>
	I1009 19:10:43.101702   28654 main.go:141] libmachine: (ha-199780)   
	I1009 19:10:43.101711   28654 main.go:141] libmachine: (ha-199780)   </cpu>
	I1009 19:10:43.101752   28654 main.go:141] libmachine: (ha-199780)   <os>
	I1009 19:10:43.101769   28654 main.go:141] libmachine: (ha-199780)     <type>hvm</type>
	I1009 19:10:43.101776   28654 main.go:141] libmachine: (ha-199780)     <boot dev='cdrom'/>
	I1009 19:10:43.101783   28654 main.go:141] libmachine: (ha-199780)     <boot dev='hd'/>
	I1009 19:10:43.101819   28654 main.go:141] libmachine: (ha-199780)     <bootmenu enable='no'/>
	I1009 19:10:43.101840   28654 main.go:141] libmachine: (ha-199780)   </os>
	I1009 19:10:43.101848   28654 main.go:141] libmachine: (ha-199780)   <devices>
	I1009 19:10:43.101855   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='cdrom'>
	I1009 19:10:43.101864   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/boot2docker.iso'/>
	I1009 19:10:43.101869   28654 main.go:141] libmachine: (ha-199780)       <target dev='hdc' bus='scsi'/>
	I1009 19:10:43.101877   28654 main.go:141] libmachine: (ha-199780)       <readonly/>
	I1009 19:10:43.101881   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101887   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='disk'>
	I1009 19:10:43.101894   28654 main.go:141] libmachine: (ha-199780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:10:43.101901   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk'/>
	I1009 19:10:43.101908   28654 main.go:141] libmachine: (ha-199780)       <target dev='hda' bus='virtio'/>
	I1009 19:10:43.101913   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101919   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101933   28654 main.go:141] libmachine: (ha-199780)       <source network='mk-ha-199780'/>
	I1009 19:10:43.101946   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101959   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.101969   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101978   28654 main.go:141] libmachine: (ha-199780)       <source network='default'/>
	I1009 19:10:43.101987   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101995   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.102004   28654 main.go:141] libmachine: (ha-199780)     <serial type='pty'>
	I1009 19:10:43.102012   28654 main.go:141] libmachine: (ha-199780)       <target port='0'/>
	I1009 19:10:43.102025   28654 main.go:141] libmachine: (ha-199780)     </serial>
	I1009 19:10:43.102042   28654 main.go:141] libmachine: (ha-199780)     <console type='pty'>
	I1009 19:10:43.102058   28654 main.go:141] libmachine: (ha-199780)       <target type='serial' port='0'/>
	I1009 19:10:43.102072   28654 main.go:141] libmachine: (ha-199780)     </console>
	I1009 19:10:43.102081   28654 main.go:141] libmachine: (ha-199780)     <rng model='virtio'>
	I1009 19:10:43.102095   28654 main.go:141] libmachine: (ha-199780)       <backend model='random'>/dev/random</backend>
	I1009 19:10:43.102102   28654 main.go:141] libmachine: (ha-199780)     </rng>
	I1009 19:10:43.102106   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102114   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102124   28654 main.go:141] libmachine: (ha-199780)   </devices>
	I1009 19:10:43.102131   28654 main.go:141] libmachine: (ha-199780) </domain>
	I1009 19:10:43.102144   28654 main.go:141] libmachine: (ha-199780) 
	I1009 19:10:43.106174   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:62:13:83 in network default
	I1009 19:10:43.106715   28654 main.go:141] libmachine: (ha-199780) Ensuring networks are active...
	I1009 19:10:43.106743   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:43.107417   28654 main.go:141] libmachine: (ha-199780) Ensuring network default is active
	I1009 19:10:43.107748   28654 main.go:141] libmachine: (ha-199780) Ensuring network mk-ha-199780 is active
	I1009 19:10:43.108262   28654 main.go:141] libmachine: (ha-199780) Getting domain xml...
	I1009 19:10:43.109003   28654 main.go:141] libmachine: (ha-199780) Creating domain...
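
Once the <domain type='kvm'> document above has been generated, "Creating domain..." corresponds to defining and booting it through libvirt. The sketch below shows that step with the libvirt-go bindings under the same caveat as before: DomainDefineXML/Create are the assumed API, and domain.xml is a placeholder file standing in for the XML printed above.

    package main

    import (
        "log"
        "os"

        libvirt "github.com/libvirt/libvirt-go"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // domain.xml holds the <domain type='kvm'> document printed in the log above.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatal(err)
        }

        // Define the persistent domain, then boot it ("Creating domain..." in the log).
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain ha-199780 defined and started")
    }
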
	I1009 19:10:44.275323   28654 main.go:141] libmachine: (ha-199780) Waiting to get IP...
	I1009 19:10:44.276021   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.276397   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.276440   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.276393   28677 retry.go:31] will retry after 234.976528ms: waiting for machine to come up
	I1009 19:10:44.512805   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.513239   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.513266   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.513207   28677 retry.go:31] will retry after 293.441421ms: waiting for machine to come up
	I1009 19:10:44.808637   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.809099   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.809119   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.809062   28677 retry.go:31] will retry after 303.641198ms: waiting for machine to come up
	I1009 19:10:45.114382   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.114813   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.114842   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.114772   28677 retry.go:31] will retry after 536.014176ms: waiting for machine to come up
	I1009 19:10:45.652428   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.652792   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.652818   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.652745   28677 retry.go:31] will retry after 705.110787ms: waiting for machine to come up
	I1009 19:10:46.359497   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:46.360044   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:46.360101   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:46.360017   28677 retry.go:31] will retry after 647.020654ms: waiting for machine to come up
	I1009 19:10:47.008863   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:47.009323   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:47.009364   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:47.009282   28677 retry.go:31] will retry after 1.0294982s: waiting for machine to come up
	I1009 19:10:48.039832   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:48.040304   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:48.040326   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:48.040267   28677 retry.go:31] will retry after 1.106767931s: waiting for machine to come up
	I1009 19:10:49.148646   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:49.149054   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:49.149076   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:49.149026   28677 retry.go:31] will retry after 1.376949133s: waiting for machine to come up
	I1009 19:10:50.527437   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:50.527855   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:50.527877   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:50.527806   28677 retry.go:31] will retry after 1.480550438s: waiting for machine to come up
	I1009 19:10:52.009673   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:52.010195   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:52.010224   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:52.010161   28677 retry.go:31] will retry after 2.407652517s: waiting for machine to come up
	I1009 19:10:54.420236   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:54.420627   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:54.420661   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:54.420596   28677 retry.go:31] will retry after 3.410708317s: waiting for machine to come up
	I1009 19:10:57.833396   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:57.833828   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:57.833855   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:57.833781   28677 retry.go:31] will retry after 3.08007179s: waiting for machine to come up
	I1009 19:11:00.918052   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:00.918375   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:11:00.918394   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:11:00.918349   28677 retry.go:31] will retry after 3.66383863s: waiting for machine to come up
	I1009 19:11:04.584755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585113   28654 main.go:141] libmachine: (ha-199780) Found IP for machine: 192.168.39.114
	I1009 19:11:04.585143   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
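
The "will retry after ..." cadence above shows the wait-for-IP loop: each attempt asks libvirt for a DHCP lease matching the new MAC address and, when none exists yet, sleeps for a randomized, growing interval before trying again, until the address finally appears as it does here. A minimal sketch of that backoff pattern follows; lookupIP is a hypothetical stand-in for the lease query.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases
    // for the domain's MAC address; here it never succeeds, to show the retries.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries with a randomized, growing delay until an address
    // appears or the deadline passes, mimicking the cadence in the log.
    func waitForIP(deadline time.Duration) (string, error) {
        start := time.Now()
        delay := 200 * time.Millisecond
        for attempt := 1; time.Since(start) < deadline; attempt++ {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("retry %d: will retry after %v: waiting for machine to come up\n", attempt, wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow the base delay each round
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
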
	I1009 19:11:04.585150   28654 main.go:141] libmachine: (ha-199780) Reserving static IP address...
	I1009 19:11:04.585468   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find host DHCP lease matching {name: "ha-199780", mac: "52:54:00:5a:16:82", ip: "192.168.39.114"} in network mk-ha-199780
	I1009 19:11:04.653177   28654 main.go:141] libmachine: (ha-199780) DBG | Getting to WaitForSSH function...
	I1009 19:11:04.653210   28654 main.go:141] libmachine: (ha-199780) Reserved static IP address: 192.168.39.114
	I1009 19:11:04.653224   28654 main.go:141] libmachine: (ha-199780) Waiting for SSH to be available...
	I1009 19:11:04.655641   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.655950   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.655974   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.656128   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH client type: external
	I1009 19:11:04.656155   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa (-rw-------)
	I1009 19:11:04.656182   28654 main.go:141] libmachine: (ha-199780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:04.656192   28654 main.go:141] libmachine: (ha-199780) DBG | About to run SSH command:
	I1009 19:11:04.656207   28654 main.go:141] libmachine: (ha-199780) DBG | exit 0
	I1009 19:11:04.778875   28654 main.go:141] libmachine: (ha-199780) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:04.779170   28654 main.go:141] libmachine: (ha-199780) KVM machine creation complete!
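
The "About to run SSH command: exit 0" exchange above is how SSH reachability is probed before creation is declared complete: the driver shells out to the system ssh client with host-key checking disabled and runs a no-op exit 0, treating a zero exit status as "SSH available". A small sketch of that probe using os/exec is shown below; the host address and key path are placeholders taken from the log, and only a subset of the ssh options is reproduced.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // sshReady shells out to the system ssh client and runs a no-op command,
    // mirroring the "exit 0" probe in the log.
    func sshReady(host, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+host,
            "exit 0")
        return cmd.Run() == nil // exit status 0 means the guest's sshd answered
    }

    func main() {
        for i := 0; i < 30; i++ {
            if sshReady("192.168.39.114", "/path/to/id_rsa") {
                fmt.Println("SSH available; KVM machine creation complete")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for SSH")
    }
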
	I1009 19:11:04.779478   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:04.780010   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780176   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780315   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:04.780331   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:04.781523   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:04.781541   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:04.781546   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:04.781551   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.783979   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784330   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.784354   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784520   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.784676   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784815   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784920   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.785023   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.785198   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.785208   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:04.886621   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:04.886642   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:04.886652   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.889117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889470   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.889489   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889658   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.889825   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.889979   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.890105   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.890280   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.890429   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.890439   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:04.991626   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:04.991752   28654 main.go:141] libmachine: found compatible host: buildroot
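The detection step above just reads /etc/os-release on the freshly booted guest over SSH and matches the ID field against known provisioners (Buildroot for the minikube ISO). A minimal way to reproduce that check by hand, using the guest IP and key path shown earlier in this log:

    # Illustrative only; IP and key path are the ones from the SSH options logged above.
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa \
        docker@192.168.39.114 'cat /etc/os-release'
    # Expected for this ISO: ID=buildroot, VERSION_ID=2023.02.9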
	I1009 19:11:04.991763   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:04.991772   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.991975   28654 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:11:04.991994   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.992147   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.994446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994806   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.994831   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994954   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.995140   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995287   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995424   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.995557   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.995745   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.995756   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:11:05.113349   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:11:05.113396   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.116625   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117021   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.117049   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117198   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.117349   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117468   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117570   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.117692   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.117857   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.117885   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:05.228123   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
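The hostname command above is idempotent: it sets the running and persistent hostname, then only touches /etc/hosts if no entry for the machine name exists, preferring to rewrite an existing 127.0.1.1 line. The same logic as a standalone sketch (the helper name is made up; run it on the guest):

    # Hypothetical helper mirroring the SSH command in the log above.
    set_machine_hostname() {
      local name="$1"
      sudo hostname "$name" && echo "$name" | sudo tee /etc/hostname >/dev/null
      if ! grep -q "[[:space:]]$name\$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
          sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" /etc/hosts
        else
          echo "127.0.1.1 $name" | sudo tee -a /etc/hosts >/dev/null
        fi
      fi
    }
    set_machine_hostname ha-199780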
	I1009 19:11:05.228148   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:05.228172   28654 buildroot.go:174] setting up certificates
	I1009 19:11:05.228182   28654 provision.go:84] configureAuth start
	I1009 19:11:05.228189   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:05.228442   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.230797   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231092   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.231117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231241   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.233255   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233547   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.233569   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233652   28654 provision.go:143] copyHostCerts
	I1009 19:11:05.233688   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233736   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:05.233748   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233826   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:05.233942   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.233970   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:05.233976   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.234005   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:05.234063   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234084   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:05.234090   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234111   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:05.234159   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	I1009 19:11:05.299525   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:05.299577   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:05.299597   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.301859   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302122   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.302159   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302298   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.302456   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.302593   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.302710   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.385328   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:05.385392   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:05.408377   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:05.408446   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:11:05.431231   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:05.431308   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:05.454941   28654 provision.go:87] duration metric: took 226.750506ms to configureAuth
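configureAuth copies the host CA material into the machine store, issues a per-machine server certificate with the SANs shown in the `generating server cert` line, and pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest. To confirm what was issued (paths are this run's; purely illustrative):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # Expect the SANs from the log: 127.0.0.1, 192.168.39.114, ha-199780, localhost, minikube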
	I1009 19:11:05.454965   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:05.455145   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:05.455206   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.457741   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458006   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.458042   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458216   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.458397   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458525   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458644   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.458788   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.458960   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.458976   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:05.676474   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
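The command above writes a one-line environment file, /etc/sysconfig/crio.minikube, marking the service CIDR 10.96.0.0/12 as an insecure registry, and restarts CRI-O to pick it up. A quick way to confirm the result on the guest (illustrative; the crio unit on the minikube ISO is expected to source this file):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl cat crio | grep -i EnvironmentFile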
	I1009 19:11:05.676512   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:05.676522   28654 main.go:141] libmachine: (ha-199780) Calling .GetURL
	I1009 19:11:05.677728   28654 main.go:141] libmachine: (ha-199780) DBG | Using libvirt version 6000000
	I1009 19:11:05.679755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680041   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.680069   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680196   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:05.680210   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:05.680217   28654 client.go:171] duration metric: took 23.163762708s to LocalClient.Create
	I1009 19:11:05.680235   28654 start.go:167] duration metric: took 23.163818343s to libmachine.API.Create "ha-199780"
	I1009 19:11:05.680244   28654 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:11:05.680255   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:05.680269   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.680459   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:05.680481   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.682388   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682658   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.682683   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682747   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.682909   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.683039   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.683197   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.767177   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:05.771701   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:05.771721   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:05.771790   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:05.771869   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:05.771881   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:05.771984   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:05.783287   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:05.808917   28654 start.go:296] duration metric: took 128.662808ms for postStartSetup
	I1009 19:11:05.808956   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:05.809504   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.812016   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812350   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.812373   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812566   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:05.812738   28654 start.go:128] duration metric: took 23.313722048s to createHost
	I1009 19:11:05.812762   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.814746   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.815078   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815176   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.815323   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815479   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815598   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.815737   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.815932   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.815953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:05.919951   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501065.894358321
	
	I1009 19:11:05.919974   28654 fix.go:216] guest clock: 1728501065.894358321
	I1009 19:11:05.919982   28654 fix.go:229] Guest: 2024-10-09 19:11:05.894358321 +0000 UTC Remote: 2024-10-09 19:11:05.812750418 +0000 UTC m=+23.417944098 (delta=81.607903ms)
	I1009 19:11:05.920005   28654 fix.go:200] guest clock delta is within tolerance: 81.607903ms
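The fix-up above compares the guest's `date +%s.%N` with the host clock at the moment the command returns and accepts the ~82ms delta. A rough standalone version of that comparison; the 1-second tolerance is an assumption for illustration, not minikube's exact threshold, and KEY/GUEST are placeholders for the key path and IP used throughout this log:

    guest_ts=$(ssh -i "$KEY" docker@"$GUEST" 'date +%s.%N')
    host_ts=$(date +%s.%N)
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN {
      d = (h > g) ? h - g : g - h
      printf "guest clock delta: %.3fs (%s)\n", d, (d < 1.0) ? "ok" : "clock skew too large"
    }'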
	I1009 19:11:05.920012   28654 start.go:83] releasing machines lock for "ha-199780", held for 23.421078352s
	I1009 19:11:05.920035   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.920263   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.922615   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.922966   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.922995   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.923150   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923568   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923734   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923824   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:05.923862   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.924006   28654 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:05.924044   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.926446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926648   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926765   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.926802   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926912   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.927038   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927086   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.927223   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927272   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927339   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.927433   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927750   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927897   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:06.024499   28654 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:06.030414   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:06.185061   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:06.191423   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:06.191490   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:06.206786   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
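Because minikube will install its own CNI (kindnet is recommended later in this log), the find command above parks any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix; the podman bridge conflist is the one it catches here. The same rename written out readably:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -exec sh -c 'echo "disabling $1"; mv "$1" "$1.mk_disabled"' _ {} \;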
	I1009 19:11:06.206805   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:06.206857   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:06.222401   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:06.235373   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:06.235433   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:06.247949   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:06.260686   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:06.376406   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:06.514646   28654 docker.go:233] disabling docker service ...
	I1009 19:11:06.514703   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:06.529298   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:06.542407   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:06.674904   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:06.805457   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:06.819076   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:06.839480   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:06.839538   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.851838   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:06.851893   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.864160   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.876368   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.889066   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:06.901093   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.912169   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.929058   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
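The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image pinned to registry.k8s.io/pause:3.10, cgroupfs as the cgroup manager, conmon moved into the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 added to default_sysctls. A spot-check of the result on the guest (expected values reconstructed from the commands, not read back in this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",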
	I1009 19:11:06.939929   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:06.949542   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:06.949583   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:06.962939   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
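The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. The same fallback as a two-line snippet:

    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
        || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'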
	I1009 19:11:06.972697   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:07.093662   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:07.192295   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:07.192352   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:07.197105   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:07.197162   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:07.200935   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:07.247609   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:07.247689   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.275380   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.304930   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:07.306083   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:07.308768   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309094   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:07.309121   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309303   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:07.313459   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:07.326691   28654 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:07.326798   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:07.326859   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:07.358942   28654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 19:11:07.359000   28654 ssh_runner.go:195] Run: which lz4
	I1009 19:11:07.363007   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1009 19:11:07.363119   28654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:11:07.367226   28654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:11:07.367262   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 19:11:08.682998   28654 crio.go:462] duration metric: took 1.319910565s to copy over tarball
	I1009 19:11:08.683082   28654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 19:11:10.661640   28654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978525541s)
	I1009 19:11:10.661674   28654 crio.go:469] duration metric: took 1.978647131s to extract the tarball
	I1009 19:11:10.661683   28654 ssh_runner.go:146] rm: /preloaded.tar.lz4
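Since the guest has no /preloaded.tar.lz4 yet, minikube copies its ~388 MB preload tarball for v1.31.1/cri-o over SSH and unpacks it under /var, keeping xattrs so file capabilities survive. The guest-side half of that, as run above:

    # Illustrative; the tarball is the one scp'd in the preceding log lines.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json | head   # should now list the preloaded control-plane images
    sudo rm /preloaded.tar.lz4                # the tarball is removed once extracted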
	I1009 19:11:10.698452   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:10.744870   28654 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:10.744890   28654 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:11:10.744897   28654 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:11:10.744976   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
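The [Unit]/[Service] fragment above becomes a systemd drop-in for kubelet; the flags that matter for this node are --hostname-override=ha-199780 and --node-ip=192.168.39.114, which pin the node identity to this VM. Where it lands and how to double-check it on the guest (the drop-in path matches the scp a few lines below):

    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet | grep -E 'hostname-override|node-ip'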
	I1009 19:11:10.745041   28654 ssh_runner.go:195] Run: crio config
	I1009 19:11:10.794773   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:10.794792   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:10.794807   28654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:10.794828   28654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:10.794978   28654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
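The three YAML documents above are rendered to /var/tmp/minikube/kubeadm.yaml.new (2153 bytes, per the scp further down) before kubeadm consumes them. A quick spot-check of the fields that drive the HA setup, plus an optional offline validation; the validate subcommand is a standard kubeadm command, not something minikube runs here:

    grep -E 'controlPlaneEndpoint|podSubnet|serviceSubnet|cgroupDriver' /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new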
	I1009 19:11:10.795005   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:10.795055   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:10.811512   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:10.811631   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
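kube-vip runs as a static pod on each control-plane node and owns the HA virtual IP 192.168.39.254, with control-plane load-balancing on port 8443 enabled by the lb_enable/lb_port settings above. Once the manifest is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp below) and the control plane is serving, reachability of the VIP can be probed with something like:

    # Illustrative probe; assumes the API server behind the VIP is already up.
    nc -zv 192.168.39.254 8443
    curl -ks https://192.168.39.254:8443/version   # /version is typically readable anonymously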
	I1009 19:11:10.811693   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:10.821887   28654 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:10.821946   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:11:10.831583   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:11:10.848385   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:10.865617   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:11:10.882082   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1009 19:11:10.898198   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:10.902054   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:10.914494   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:11.043972   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:11.060509   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:11:11.060533   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:11.060553   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.060728   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:11.060785   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:11.060798   28654 certs.go:256] generating profile certs ...
	I1009 19:11:11.060867   28654 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:11.060891   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt with IP's: []
	I1009 19:11:11.257901   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt ...
	I1009 19:11:11.257931   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt: {Name:mke6971132fee40da37bc72041e92dde05b5c360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258111   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key ...
	I1009 19:11:11.258127   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key: {Name:mk2c48ceaf748f5efc5f062df1cf8bf8d38b626a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258227   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621
	I1009 19:11:11.258246   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I1009 19:11:11.502202   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 ...
	I1009 19:11:11.502241   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621: {Name:mk85bc5cf43d418e43d8be4b6611eb785caa9f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502445   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 ...
	I1009 19:11:11.502463   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621: {Name:mk1d94ea93b96fe750cd9f95170ab488ca016856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502573   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:11.502721   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:11:11.502815   28654 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:11.502839   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt with IP's: []
	I1009 19:11:11.612443   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt ...
	I1009 19:11:11.612470   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt: {Name:mk212b018e6441944e189239707af3950678c689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612646   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key ...
	I1009 19:11:11.612656   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key: {Name:mkb7f3d492b787f9b9b56d2b48939b9971f793ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
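Three profile certificates are minted here: a client certificate for kubectl, the API server serving certificate whose SANs cover the first service IP 10.96.0.1, localhost, the node IP and the kube-vip VIP 192.168.39.254, and the front-proxy ("aggregator") client certificate. The SAN list can be checked against the log with:

    openssl x509 -noout -ext subjectAltName \
        -in /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
    # Expect the IPs from the log: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.114, 192.168.39.254
    # (DNS SANs may also be present.)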
	I1009 19:11:11.612724   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:11.612740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:11.612751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:11.612763   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:11.612774   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:11.612786   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:11.612798   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:11.612810   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:11.612864   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:11.612897   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:11.612903   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:11.612926   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:11.612951   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:11.612971   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:11.613006   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:11.613033   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.613046   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.613058   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:11.613596   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:11.638855   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:11.662787   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:11.686693   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:11.710429   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:11.734032   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:11.757651   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:11.781611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:11.805128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:11.831515   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:11.878516   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:11.903576   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:11:11.920589   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:11.926400   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:11.937651   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942167   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942223   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.947902   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:11.959013   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:11.970169   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974738   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974799   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.980430   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:11.991569   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:12.002421   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006666   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006711   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.012305   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
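The hash-and-symlink sequence above is how each CA PEM gets registered in the guest's system trust store: openssl prints the certificate's subject hash, then a <hash>.0 symlink under /etc/ssl/certs is pointed at the PEM. A minimal, illustrative Go sketch of that step (example paths; this is not minikube source):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the log's "openssl x509 -hash" + "ln -fs" pair:
// compute the certificate's subject hash and point <hash>.0 at the PEM.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))     // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0") // e.g. /etc/ssl/certs/b5213941.0
	_ = os.Remove(link)                        // emulate ln -fs (force re-link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}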
	I1009 19:11:12.023435   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:12.027428   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:12.027474   28654 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:12.027535   28654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:12.027572   28654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:12.068414   28654 cri.go:89] found id: ""
	I1009 19:11:12.068473   28654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:12.078653   28654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:11:12.088659   28654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:12.098391   28654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:11:12.098408   28654 kubeadm.go:157] found existing configuration files:
	
	I1009 19:11:12.098445   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:11:12.107757   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:11:12.107807   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:11:12.117369   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:11:12.126789   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:11:12.126847   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:12.136637   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.146308   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:11:12.146364   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.156469   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:11:12.165834   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:11:12.165886   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:12.175515   28654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 19:11:12.280177   28654 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 19:11:12.280255   28654 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 19:11:12.386423   28654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:11:12.386621   28654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:11:12.386752   28654 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:11:12.404964   28654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:11:12.482162   28654 out.go:235]   - Generating certificates and keys ...
	I1009 19:11:12.482262   28654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 19:11:12.482346   28654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 19:11:12.648552   28654 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:11:12.833455   28654 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:11:13.055850   28654 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:11:13.322371   28654 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 19:11:13.484433   28654 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 19:11:13.484631   28654 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:13.583799   28654 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 19:11:13.584031   28654 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:14.090538   28654 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:11:14.260812   28654 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:11:14.391262   28654 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 19:11:14.391369   28654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:11:14.744340   28654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:11:14.834478   28654 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:11:14.925339   28654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:11:15.080024   28654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:11:15.271189   28654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:11:15.271810   28654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:11:15.277194   28654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:11:15.369554   28654 out.go:235]   - Booting up control plane ...
	I1009 19:11:15.369723   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:11:15.369842   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:11:15.369937   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:11:15.370057   28654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:11:15.370148   28654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:11:15.370183   28654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 19:11:15.445224   28654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:11:15.445341   28654 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:11:16.448580   28654 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005128821s
	I1009 19:11:16.448662   28654 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 19:11:22.061566   28654 kubeadm.go:310] [api-check] The API server is healthy after 5.61687232s
	I1009 19:11:22.078904   28654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:11:22.108560   28654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:11:22.646139   28654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:11:22.646344   28654 kubeadm.go:310] [mark-control-plane] Marking the node ha-199780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:11:22.657702   28654 kubeadm.go:310] [bootstrap-token] Using token: n3skeb.bws3ifw22cumajmm
	I1009 19:11:22.659119   28654 out.go:235]   - Configuring RBAC rules ...
	I1009 19:11:22.659267   28654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:11:22.664574   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:11:22.677942   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:11:22.681624   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:11:22.685155   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:11:22.689541   28654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:11:22.705080   28654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:11:22.957052   28654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 19:11:23.469842   28654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 19:11:23.470871   28654 kubeadm.go:310] 
	I1009 19:11:23.470925   28654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 19:11:23.470933   28654 kubeadm.go:310] 
	I1009 19:11:23.471051   28654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 19:11:23.471083   28654 kubeadm.go:310] 
	I1009 19:11:23.471125   28654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 19:11:23.471223   28654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:11:23.471271   28654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:11:23.471296   28654 kubeadm.go:310] 
	I1009 19:11:23.471380   28654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 19:11:23.471393   28654 kubeadm.go:310] 
	I1009 19:11:23.471455   28654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:11:23.471464   28654 kubeadm.go:310] 
	I1009 19:11:23.471537   28654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 19:11:23.471641   28654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:11:23.471738   28654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:11:23.471753   28654 kubeadm.go:310] 
	I1009 19:11:23.471870   28654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:11:23.471974   28654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 19:11:23.471984   28654 kubeadm.go:310] 
	I1009 19:11:23.472086   28654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472234   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 19:11:23.472263   28654 kubeadm.go:310] 	--control-plane 
	I1009 19:11:23.472276   28654 kubeadm.go:310] 
	I1009 19:11:23.472382   28654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:11:23.472392   28654 kubeadm.go:310] 
	I1009 19:11:23.472488   28654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472616   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 19:11:23.473525   28654 kubeadm.go:310] W1009 19:11:12.257145     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473837   28654 kubeadm.go:310] W1009 19:11:12.259703     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473994   28654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
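The --discovery-token-ca-cert-hash value shown in the join commands above is the SHA-256 digest of the cluster CA's Subject Public Key Info. A short sketch of how that digest can be recomputed for verification (the certificate path is an example):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // example path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash is the SHA-256 of the CA's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}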
	I1009 19:11:23.474033   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:23.474046   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:23.475963   28654 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 19:11:23.477363   28654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:11:23.483529   28654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 19:11:23.483553   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:11:23.504303   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:11:23.863157   28654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:11:23.863274   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:23.863284   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780 minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=true
	I1009 19:11:23.884152   28654 ops.go:34] apiserver oom_adj: -16
	I1009 19:11:24.005714   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:24.506374   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.006091   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.506438   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.006141   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.506040   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.006400   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.505831   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.598386   28654 kubeadm.go:1113] duration metric: took 3.735177044s to wait for elevateKubeSystemPrivileges
	I1009 19:11:27.598425   28654 kubeadm.go:394] duration metric: took 15.5709527s to StartCluster
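The burst of repeated `kubectl get sa default` calls above is a poll loop: the default ServiceAccount must exist before the cluster-admin binding for kube-system is applied. A rough sketch of that wait, assuming the binary and kubeconfig paths from the log (not the actual minikube implementation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until the ServiceAccount
// exists or the timeout expires, matching the ~500ms cadence in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default ServiceAccount is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}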
	I1009 19:11:27.598446   28654 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.598527   28654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.599166   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.599347   28654 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:27.599374   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:11:27.599357   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:11:27.599375   28654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:11:27.599458   28654 addons.go:69] Setting storage-provisioner=true in profile "ha-199780"
	I1009 19:11:27.599469   28654 addons.go:69] Setting default-storageclass=true in profile "ha-199780"
	I1009 19:11:27.599477   28654 addons.go:234] Setting addon storage-provisioner=true in "ha-199780"
	I1009 19:11:27.599485   28654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-199780"
	I1009 19:11:27.599503   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.599506   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:27.599886   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599927   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599929   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.599968   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.614342   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I1009 19:11:27.614587   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I1009 19:11:27.614820   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615004   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615360   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615381   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615494   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615521   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615770   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615869   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615936   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.616437   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.616482   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.618027   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.618409   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:11:27.618933   28654 cert_rotation.go:140] Starting client certificate rotation controller
	I1009 19:11:27.619199   28654 addons.go:234] Setting addon default-storageclass=true in "ha-199780"
	I1009 19:11:27.619240   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.619589   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.619644   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.631880   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I1009 19:11:27.632439   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.632953   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.632968   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.633306   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.633511   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.633650   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I1009 19:11:27.634127   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.634757   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.634777   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.635148   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.635306   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.635705   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.635747   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.637278   28654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:11:27.638972   28654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.638992   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:11:27.639008   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.642192   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642642   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.642674   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642796   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.642968   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.643174   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.643344   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.651531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I1009 19:11:27.652010   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.652633   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.652663   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.652996   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.653186   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.654702   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.654903   28654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:27.654916   28654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:11:27.654931   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.657462   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657809   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.657834   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657997   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.658162   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.658275   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.658409   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.708249   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:11:27.824778   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.831460   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:28.120955   28654 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 19:11:28.573087   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573114   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573134   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573150   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573505   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573520   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573544   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573545   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573557   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573510   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573628   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573649   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573658   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573565   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573900   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573917   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573930   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573931   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573940   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573984   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.574002   28654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:11:28.574017   28654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:11:28.574123   28654 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1009 19:11:28.574129   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.574140   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.574147   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.586337   28654 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1009 19:11:28.587207   28654 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1009 19:11:28.587225   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.587233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.587241   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.587251   28654 round_trippers.go:473]     Content-Type: application/json
	I1009 19:11:28.594277   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:11:28.594441   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.594457   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.594703   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.594721   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.596581   28654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:11:28.597699   28654 addons.go:510] duration metric: took 998.327173ms for enable addons: enabled=[storage-provisioner default-storageclass]
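The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard above corresponds to marking the provisioned class as the cluster default. A minimal client-go sketch of such an update, assuming an example kubeconfig path (the annotation key is the standard Kubernetes one):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // example kubeconfig
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// The standard annotation that makes a StorageClass the cluster default.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard marked as default StorageClass")
}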
	I1009 19:11:28.597726   28654 start.go:246] waiting for cluster config update ...
	I1009 19:11:28.597735   28654 start.go:255] writing updated cluster config ...
	I1009 19:11:28.599169   28654 out.go:201] 
	I1009 19:11:28.600456   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:28.600538   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.601965   28654 out.go:177] * Starting "ha-199780-m02" control-plane node in "ha-199780" cluster
	I1009 19:11:28.602974   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:28.602993   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:11:28.603093   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:28.603107   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:11:28.603182   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.603350   28654 start.go:360] acquireMachinesLock for ha-199780-m02: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:11:28.603394   28654 start.go:364] duration metric: took 25.364µs to acquireMachinesLock for "ha-199780-m02"
	I1009 19:11:28.603415   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:28.603505   28654 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1009 19:11:28.604883   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:11:28.604963   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:28.604996   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:28.620174   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1009 19:11:28.620709   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:28.621235   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:28.621259   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:28.621551   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:28.621737   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:28.621880   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:28.622077   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:11:28.622107   28654 client.go:168] LocalClient.Create starting
	I1009 19:11:28.622146   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:11:28.622193   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622213   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622278   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:11:28.622306   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622322   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622345   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:11:28.622356   28654 main.go:141] libmachine: (ha-199780-m02) Calling .PreCreateCheck
	I1009 19:11:28.622534   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:28.622992   28654 main.go:141] libmachine: Creating machine...
	I1009 19:11:28.623009   28654 main.go:141] libmachine: (ha-199780-m02) Calling .Create
	I1009 19:11:28.623202   28654 main.go:141] libmachine: (ha-199780-m02) Creating KVM machine...
	I1009 19:11:28.624414   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing default KVM network
	I1009 19:11:28.624553   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing private KVM network mk-ha-199780
	I1009 19:11:28.624697   28654 main.go:141] libmachine: (ha-199780-m02) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:28.624717   28654 main.go:141] libmachine: (ha-199780-m02) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:11:28.627180   28654 main.go:141] libmachine: (ha-199780-m02) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:11:28.627222   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.624673   29017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:28.859004   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.858864   29017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa...
	I1009 19:11:29.192250   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192144   29017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk...
	I1009 19:11:29.192281   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing magic tar header
	I1009 19:11:29.192291   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing SSH key tar header
	I1009 19:11:29.192299   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192250   29017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:29.192353   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02
	I1009 19:11:29.192372   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:11:29.192385   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 (perms=drwx------)
	I1009 19:11:29.192398   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:29.192410   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:11:29.192419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:11:29.192426   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:11:29.192433   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home
	I1009 19:11:29.192451   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Skipping /home - not owner
	I1009 19:11:29.192471   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:11:29.192484   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:11:29.192493   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:11:29.192501   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:11:29.192508   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:11:29.192515   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:29.193313   28654 main.go:141] libmachine: (ha-199780-m02) define libvirt domain using xml: 
	I1009 19:11:29.193342   28654 main.go:141] libmachine: (ha-199780-m02) <domain type='kvm'>
	I1009 19:11:29.193353   28654 main.go:141] libmachine: (ha-199780-m02)   <name>ha-199780-m02</name>
	I1009 19:11:29.193360   28654 main.go:141] libmachine: (ha-199780-m02)   <memory unit='MiB'>2200</memory>
	I1009 19:11:29.193368   28654 main.go:141] libmachine: (ha-199780-m02)   <vcpu>2</vcpu>
	I1009 19:11:29.193381   28654 main.go:141] libmachine: (ha-199780-m02)   <features>
	I1009 19:11:29.193404   28654 main.go:141] libmachine: (ha-199780-m02)     <acpi/>
	I1009 19:11:29.193418   28654 main.go:141] libmachine: (ha-199780-m02)     <apic/>
	I1009 19:11:29.193448   28654 main.go:141] libmachine: (ha-199780-m02)     <pae/>
	I1009 19:11:29.193470   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193481   28654 main.go:141] libmachine: (ha-199780-m02)   </features>
	I1009 19:11:29.193502   28654 main.go:141] libmachine: (ha-199780-m02)   <cpu mode='host-passthrough'>
	I1009 19:11:29.193521   28654 main.go:141] libmachine: (ha-199780-m02)   
	I1009 19:11:29.193531   28654 main.go:141] libmachine: (ha-199780-m02)   </cpu>
	I1009 19:11:29.193548   28654 main.go:141] libmachine: (ha-199780-m02)   <os>
	I1009 19:11:29.193569   28654 main.go:141] libmachine: (ha-199780-m02)     <type>hvm</type>
	I1009 19:11:29.193584   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='cdrom'/>
	I1009 19:11:29.193597   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='hd'/>
	I1009 19:11:29.193605   28654 main.go:141] libmachine: (ha-199780-m02)     <bootmenu enable='no'/>
	I1009 19:11:29.193614   28654 main.go:141] libmachine: (ha-199780-m02)   </os>
	I1009 19:11:29.193622   28654 main.go:141] libmachine: (ha-199780-m02)   <devices>
	I1009 19:11:29.193631   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='cdrom'>
	I1009 19:11:29.193644   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/boot2docker.iso'/>
	I1009 19:11:29.193658   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hdc' bus='scsi'/>
	I1009 19:11:29.193669   28654 main.go:141] libmachine: (ha-199780-m02)       <readonly/>
	I1009 19:11:29.193678   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193692   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='disk'>
	I1009 19:11:29.193703   28654 main.go:141] libmachine: (ha-199780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:11:29.193717   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk'/>
	I1009 19:11:29.193731   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hda' bus='virtio'/>
	I1009 19:11:29.193743   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193752   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193764   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='mk-ha-199780'/>
	I1009 19:11:29.193774   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193784   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193794   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193805   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='default'/>
	I1009 19:11:29.193820   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193833   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193841   28654 main.go:141] libmachine: (ha-199780-m02)     <serial type='pty'>
	I1009 19:11:29.193855   28654 main.go:141] libmachine: (ha-199780-m02)       <target port='0'/>
	I1009 19:11:29.193865   28654 main.go:141] libmachine: (ha-199780-m02)     </serial>
	I1009 19:11:29.193871   28654 main.go:141] libmachine: (ha-199780-m02)     <console type='pty'>
	I1009 19:11:29.193881   28654 main.go:141] libmachine: (ha-199780-m02)       <target type='serial' port='0'/>
	I1009 19:11:29.193890   28654 main.go:141] libmachine: (ha-199780-m02)     </console>
	I1009 19:11:29.193901   28654 main.go:141] libmachine: (ha-199780-m02)     <rng model='virtio'>
	I1009 19:11:29.193911   28654 main.go:141] libmachine: (ha-199780-m02)       <backend model='random'>/dev/random</backend>
	I1009 19:11:29.193933   28654 main.go:141] libmachine: (ha-199780-m02)     </rng>
	I1009 19:11:29.193946   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193962   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193978   28654 main.go:141] libmachine: (ha-199780-m02)   </devices>
	I1009 19:11:29.193990   28654 main.go:141] libmachine: (ha-199780-m02) </domain>
	I1009 19:11:29.193999   28654 main.go:141] libmachine: (ha-199780-m02) 
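The XML emitted above is then defined and booted through libvirt. A small sketch of that step, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and an example file holding the XML; this is an illustration, not the kvm2 driver's code:

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt" // cgo bindings; require libvirt development headers
)

func main() {
	xml, err := os.ReadFile("ha-199780-m02.xml") // example file holding the XML printed above
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistent definition ("define libvirt domain using xml")
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain ("Creating domain...")
		panic(err)
	}
	fmt.Println("domain defined and started")
}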
	I1009 19:11:29.200233   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:9f:20:14 in network default
	I1009 19:11:29.200751   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring networks are active...
	I1009 19:11:29.200778   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:29.201355   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network default is active
	I1009 19:11:29.201602   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network mk-ha-199780 is active
	I1009 19:11:29.201876   28654 main.go:141] libmachine: (ha-199780-m02) Getting domain xml...
	I1009 19:11:29.202487   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:30.395985   28654 main.go:141] libmachine: (ha-199780-m02) Waiting to get IP...
	I1009 19:11:30.396850   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.397221   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.397245   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.397192   29017 retry.go:31] will retry after 306.623748ms: waiting for machine to come up
	I1009 19:11:30.705681   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.706111   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.706142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.706073   29017 retry.go:31] will retry after 272.886306ms: waiting for machine to come up
	I1009 19:11:30.980636   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.981119   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.981146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.981081   29017 retry.go:31] will retry after 373.250902ms: waiting for machine to come up
	I1009 19:11:31.355561   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.355953   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.355981   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.355905   29017 retry.go:31] will retry after 402.386513ms: waiting for machine to come up
	I1009 19:11:31.759650   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.760178   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.760204   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.760143   29017 retry.go:31] will retry after 700.718844ms: waiting for machine to come up
	I1009 19:11:32.462533   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:32.462970   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:32.462999   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:32.462916   29017 retry.go:31] will retry after 892.701908ms: waiting for machine to come up
	I1009 19:11:33.357278   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:33.357677   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:33.357700   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:33.357645   29017 retry.go:31] will retry after 892.900741ms: waiting for machine to come up
	I1009 19:11:34.252184   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:34.252581   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:34.252605   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:34.252542   29017 retry.go:31] will retry after 919.729577ms: waiting for machine to come up
	I1009 19:11:35.174060   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:35.174445   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:35.174475   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:35.174422   29017 retry.go:31] will retry after 1.688669614s: waiting for machine to come up
	I1009 19:11:36.865075   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:36.865384   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:36.865412   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:36.865340   29017 retry.go:31] will retry after 1.768384485s: waiting for machine to come up
	I1009 19:11:38.635106   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:38.635545   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:38.635574   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:38.635487   29017 retry.go:31] will retry after 2.193559284s: waiting for machine to come up
	I1009 19:11:40.831238   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:40.831740   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:40.831780   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:40.831709   29017 retry.go:31] will retry after 3.434402997s: waiting for machine to come up
	I1009 19:11:44.267146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:44.267644   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:44.267671   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:44.267602   29017 retry.go:31] will retry after 4.164642466s: waiting for machine to come up
	I1009 19:11:48.436657   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:48.436991   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:48.437015   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:48.436952   29017 retry.go:31] will retry after 3.860630111s: waiting for machine to come up
	I1009 19:11:52.302118   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302487   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has current primary IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302554   28654 main.go:141] libmachine: (ha-199780-m02) Found IP for machine: 192.168.39.83
	I1009 19:11:52.302579   28654 main.go:141] libmachine: (ha-199780-m02) Reserving static IP address...
	I1009 19:11:52.302886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find host DHCP lease matching {name: "ha-199780-m02", mac: "52:54:00:49:9d:cf", ip: "192.168.39.83"} in network mk-ha-199780
	I1009 19:11:52.372076   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Getting to WaitForSSH function...
	I1009 19:11:52.372102   28654 main.go:141] libmachine: (ha-199780-m02) Reserved static IP address: 192.168.39.83
	I1009 19:11:52.372115   28654 main.go:141] libmachine: (ha-199780-m02) Waiting for SSH to be available...
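Note: the "will retry after …" lines above are a grow-the-delay polling loop around the libvirt DHCP-lease lookup. Below is a minimal, self-contained Go sketch of that pattern only; it is not minikube's actual implementation, and lookupIP, the delays, and the deadline are illustrative assumptions.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it is hypothetical and
// always fails here so the retry loop is exercised.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP, sleeping a little longer between attempts, until
// an IP is found or the deadline passes.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 300 * time.Millisecond
	start := time.Now()
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off, roughly like the growing delays in the log
		}
	}
	return "", fmt.Errorf("machine %s never reported an IP within %v", mac, deadline)
}

func main() {
	if ip, err := waitForIP("52:54:00:49:9d:cf", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}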
	I1009 19:11:52.374841   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.375450   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375560   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH client type: external
	I1009 19:11:52.375580   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa (-rw-------)
	I1009 19:11:52.375612   28654 main.go:141] libmachine: (ha-199780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:52.375635   28654 main.go:141] libmachine: (ha-199780-m02) DBG | About to run SSH command:
	I1009 19:11:52.375646   28654 main.go:141] libmachine: (ha-199780-m02) DBG | exit 0
	I1009 19:11:52.498886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:52.499168   28654 main.go:141] libmachine: (ha-199780-m02) KVM machine creation complete!
	I1009 19:11:52.499479   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:52.500069   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500241   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500393   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:52.500411   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetState
	I1009 19:11:52.501707   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:52.501728   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:52.501749   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:52.501756   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.503758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.504165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504286   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.504437   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504575   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.504794   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.504979   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.504989   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:52.602177   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.602204   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:52.602213   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.604728   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605107   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.605141   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605291   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.605469   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605606   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605724   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.605872   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.606034   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.606045   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:52.703707   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:52.703764   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:52.703771   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:52.703777   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704032   28654 buildroot.go:166] provisioning hostname "ha-199780-m02"
	I1009 19:11:52.704060   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704231   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.706798   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707185   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.707208   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707350   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.707510   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707650   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707773   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.707888   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.708063   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.708075   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m02 && echo "ha-199780-m02" | sudo tee /etc/hostname
	I1009 19:11:52.823258   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m02
	
	I1009 19:11:52.823287   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.825577   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.825861   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.825888   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.826053   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.826228   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826361   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826462   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.826604   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.826970   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.827005   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:52.936284   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
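Note: the provisioning steps above run shell commands on the new node over SSH with the key path and address shown in the log. A rough Go sketch of the same idea using golang.org/x/crypto/ssh follows; it is illustrative only (minikube's own SSH runner differs), with the host, user, and key path copied from the log lines above and only minimal error handling.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", "192.168.39.83:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// Same hostname-provisioning command the log ran above.
	host := "ha-199780-m02"
	cmd := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", host, host)
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("output: %s err: %v\n", out, err)
}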
	I1009 19:11:52.936322   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:52.936338   28654 buildroot.go:174] setting up certificates
	I1009 19:11:52.936349   28654 provision.go:84] configureAuth start
	I1009 19:11:52.936358   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.936621   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:52.939014   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939357   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.939378   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939565   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.941751   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942083   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.942102   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942262   28654 provision.go:143] copyHostCerts
	I1009 19:11:52.942292   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942326   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:52.942335   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942400   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:52.942490   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942507   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:52.942513   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942543   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:52.942586   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942603   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:52.942608   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942630   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:52.942675   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m02 san=[127.0.0.1 192.168.39.83 ha-199780-m02 localhost minikube]
	I1009 19:11:53.040172   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:53.040224   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:53.040246   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.042771   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043144   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.043165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043339   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.043536   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.043695   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.043830   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.125536   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:53.125611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:53.152398   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:53.152462   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:11:53.176418   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:53.176476   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:53.199215   28654 provision.go:87] duration metric: took 262.855174ms to configureAuth
	I1009 19:11:53.199238   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:53.199408   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:53.199489   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.202051   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202440   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.202470   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202579   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.202742   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.202905   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.203044   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.203213   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.203367   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.203381   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:53.429894   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:53.429922   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:53.429933   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetURL
	I1009 19:11:53.431192   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using libvirt version 6000000
	I1009 19:11:53.433633   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.433917   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.433942   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.434095   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:53.434111   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:53.434119   28654 client.go:171] duration metric: took 24.812002035s to LocalClient.Create
	I1009 19:11:53.434141   28654 start.go:167] duration metric: took 24.812066243s to libmachine.API.Create "ha-199780"
	I1009 19:11:53.434153   28654 start.go:293] postStartSetup for "ha-199780-m02" (driver="kvm2")
	I1009 19:11:53.434164   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:53.434178   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.434386   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:53.434414   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.436444   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436741   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.436766   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436885   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.437048   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.437204   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.437329   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.517247   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:53.521546   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:53.521570   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:53.521628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:53.521696   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:53.521706   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:53.521794   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:53.531170   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:53.555463   28654 start.go:296] duration metric: took 121.295956ms for postStartSetup
	I1009 19:11:53.555509   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:53.556089   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.558610   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.558965   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.558990   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.559241   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:53.559417   28654 start.go:128] duration metric: took 24.955894473s to createHost
	I1009 19:11:53.559436   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.561758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562120   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.562145   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562297   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.562466   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562603   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.562800   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.562944   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.562953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:53.659740   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501113.618380735
	
	I1009 19:11:53.659761   28654 fix.go:216] guest clock: 1728501113.618380735
	I1009 19:11:53.659770   28654 fix.go:229] Guest: 2024-10-09 19:11:53.618380735 +0000 UTC Remote: 2024-10-09 19:11:53.559427397 +0000 UTC m=+71.164621077 (delta=58.953338ms)
	I1009 19:11:53.659789   28654 fix.go:200] guest clock delta is within tolerance: 58.953338ms
	I1009 19:11:53.659795   28654 start.go:83] releasing machines lock for "ha-199780-m02", held for 25.056389443s
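Note: the fix.go lines above compare the guest clock against the host-side timestamp and accept the machine when the absolute delta is within tolerance. A small illustrative Go sketch of that check, using the two timestamps from the log; the 2s tolerance is an assumed value for illustration, not read from the log.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is acceptable.
func withinTolerance(delta, tolerance time.Duration) bool {
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1728501113, 618380735)  // guest clock from the log
	remote := time.Unix(1728501113, 559427397) // host-side timestamp from the log
	delta := guest.Sub(remote)
	fmt.Printf("delta=%v ok=%v\n", delta, withinTolerance(delta, 2*time.Second))
}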
	I1009 19:11:53.659818   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.660047   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.662723   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.663038   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.663084   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.665166   28654 out.go:177] * Found network options:
	I1009 19:11:53.666287   28654 out.go:177]   - NO_PROXY=192.168.39.114
	W1009 19:11:53.667466   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.667505   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.667962   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668130   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668248   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:53.668296   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	W1009 19:11:53.668300   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.668381   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:53.668416   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.670930   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671210   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671283   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671304   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671447   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671527   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671552   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671587   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671735   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671750   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.671893   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671912   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.672014   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.672148   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.899517   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:53.905678   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:53.905741   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:53.922185   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:53.922206   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:53.922263   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:53.937820   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:53.953029   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:53.953091   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:53.967078   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:53.981025   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:54.113745   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:54.255530   28654 docker.go:233] disabling docker service ...
	I1009 19:11:54.255587   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:54.270170   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:54.283110   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:54.427830   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:54.542861   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:54.559019   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:54.577775   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:54.577834   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.588489   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:54.588563   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.598988   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.609116   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.619104   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:54.629621   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.640002   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.656572   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.666994   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:54.677176   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:54.677232   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:54.689637   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:54.698765   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:54.819897   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:54.911734   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:54.911789   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:54.916451   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:54.916494   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:54.920158   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:54.955402   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
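Note: after restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to exist before querying crictl. A simple Go sketch of such a socket wait is below; the 500ms poll interval is an assumption, and minikube's real retry logic differs.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the given path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}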
	I1009 19:11:54.955480   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:54.982980   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:55.012563   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:55.013723   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:11:55.014768   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:55.017153   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017506   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:55.017538   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017692   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:55.021943   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:55.034196   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:11:55.034432   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:55.034865   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.034912   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.049583   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I1009 19:11:55.050018   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.050467   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.050491   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.050776   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.050944   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:55.052331   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:55.052611   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.052643   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.066531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1009 19:11:55.066862   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.067348   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.067376   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.067659   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.067826   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:55.067945   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.83
	I1009 19:11:55.067956   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:55.067973   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.068103   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:55.068159   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:55.068171   28654 certs.go:256] generating profile certs ...
	I1009 19:11:55.068256   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:55.068286   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0
	I1009 19:11:55.068307   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.254]
	I1009 19:11:55.274614   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 ...
	I1009 19:11:55.274645   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0: {Name:mkea8c047205788ccead22201bc77c7190717cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274816   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 ...
	I1009 19:11:55.274832   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0: {Name:mk98b6fcd80ec856f6c63ddb6177c8a08e2dbf7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274920   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:55.275082   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
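Note: the apiserver profile certificate above is generated with the IP SAN list shown in the log (service cluster IP, loopback, both node IPs, and the control-plane VIP). A self-contained, simplified Go sketch of issuing a serving certificate with such a SAN list via crypto/x509 follows; it uses a throwaway CA and illustrative subject fields, and is not minikube's code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Throwaway CA so the sketch is self-contained; the real flow loads the
	// existing minikubeCA key and cert from the .minikube directory instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Serving key and certificate carrying the IP SAN list from the log above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780-m02"}, CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.83"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}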
	I1009 19:11:55.275255   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:55.275273   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:55.275291   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:55.275308   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:55.275327   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:55.275347   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:55.275366   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:55.275383   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:55.275401   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:55.275466   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:55.275511   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:55.275524   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:55.275558   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:55.275590   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:55.275622   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:55.275679   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:55.275720   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.275740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.275758   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.275797   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:55.278862   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279369   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:55.279395   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279612   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:55.279780   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:55.279952   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:55.280049   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:55.351381   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:11:55.355961   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:11:55.367055   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:11:55.371613   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:11:55.382154   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:11:55.386133   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:11:55.395984   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:11:55.399714   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:11:55.409621   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:11:55.413853   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:11:55.423766   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:11:55.427525   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:11:55.437575   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:55.462624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:55.485719   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:55.508128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:55.530803   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:11:55.555486   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:55.580139   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:55.603207   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:55.626373   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:55.649676   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:55.673656   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:55.696721   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:11:55.712647   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:11:55.728611   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:11:55.744619   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:11:55.760726   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:11:55.776763   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:11:55.792315   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:11:55.807929   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:55.813442   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:55.823376   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827581   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.833072   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:55.842843   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:55.852649   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856766   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856802   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.862146   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:55.872016   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:55.881805   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885859   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885905   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.891246   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
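(The symlinking above is the standard OpenSSL hashed-directory layout: each PEM under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can locate it. A minimal Go sketch of that single step is below; it shells out to the same `openssl x509 -hash -noout` command seen in the log, and the cert path is illustrative rather than taken from minikube's helpers. A production version would also handle hash collisions by incrementing the .0 suffix.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash symlinks /etc/ssl/certs/<hash>.0 to the given PEM file,
    // using `openssl x509 -hash -noout` to compute the subject hash.
    func linkCertByHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale symlink, matching the `ln -fs` behaviour in the log.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }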
	I1009 19:11:55.901096   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:55.904965   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:55.905009   28654 kubeadm.go:934] updating node {m02 192.168.39.83 8443 v1.31.1 crio true true} ...
	I1009 19:11:55.905077   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:55.905098   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:55.905121   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:55.919709   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:55.919759   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
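(The static-pod manifest above is rendered by kube-vip.go from the cluster's HA VIP, API port and kube-vip image. The Go sketch below shows the general shape of that templating with text/template; the template string and struct names are illustrative assumptions, not minikube's actual template.)

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams holds the values substituted into the static-pod manifest;
    // they correspond to the address/port/image values visible in the log above.
    type vipParams struct {
        VIP   string // e.g. 192.168.39.254
        Port  string // e.g. "8443"
        Image string // e.g. ghcr.io/kube-vip/kube-vip:v0.8.3
    }

    // manifestTmpl is a deliberately trimmed-down illustration, not minikube's template.
    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: address
          value: {{.VIP}}
        - name: port
          value: "{{.Port}}"
      hostNetwork: true
    `

    func main() {
        t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        _ = t.Execute(os.Stdout, vipParams{
            VIP:   "192.168.39.254",
            Port:  "8443",
            Image: "ghcr.io/kube-vip/kube-vip:v0.8.3",
        })
    }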
	I1009 19:11:55.919801   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.929228   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:11:55.929276   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.938319   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:11:55.938340   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938391   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938402   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1009 19:11:55.938404   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1009 19:11:55.942635   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:11:55.942660   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:11:57.241263   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:57.255221   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.255304   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.259158   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:11:57.259186   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:11:57.547794   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.547883   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.562384   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:11:57.562426   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
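(Each binary above is downloaded with a `?checksum=file:...sha256` query, i.e. the payload is verified against the published SHA-256 before being cached and copied to the node. A minimal sketch of that verification step is below, assuming local file paths and the whitespace-separated digest format used by the k8s release .sha256 files; it is not minikube's download helper.)

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "strings"
    )

    // verifySHA256 checks that the file at path matches the hex digest published
    // in a .sha256 file (first whitespace-separated field).
    func verifySHA256(path, shaFilePath string) error {
        want, err := os.ReadFile(shaFilePath)
        if err != nil {
            return err
        }
        fields := strings.Fields(string(want))
        if len(fields) == 0 {
            return fmt.Errorf("empty checksum file %s", shaFilePath)
        }
        wantHex := fields[0]

        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        gotHex := hex.EncodeToString(h.Sum(nil))

        if gotHex != wantHex {
            return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, gotHex, wantHex)
        }
        return nil
    }

    func main() {
        if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }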
	I1009 19:11:57.842477   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:11:57.852027   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:11:57.867591   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:57.883108   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:11:57.898843   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:57.902642   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:57.914959   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:58.028127   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:58.044965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:58.045423   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:58.045473   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:58.059986   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I1009 19:11:58.060458   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:58.060917   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:58.060934   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:58.061238   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:58.061410   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:58.061538   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:58.061653   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:11:58.061673   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:58.064589   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.064969   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:58.064994   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.065152   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:58.065308   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:58.065538   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:58.065661   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:58.210321   28654 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:58.210383   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443"
	I1009 19:12:19.134246   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443": (20.923839028s)
	I1009 19:12:19.134290   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:12:19.605010   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m02 minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:12:19.748442   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:12:19.868185   28654 start.go:319] duration metric: took 21.806636434s to joinCluster
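(The join step is the printed `kubeadm token create --print-join-command` output plus the node-specific flags (`--control-plane`, `--apiserver-advertise-address`, `--node-name`, `--cri-socket`), executed on m02 over SSH. The sketch below only shows assembling and running that command locally with os/exec; the token and discovery hash are placeholders, not reusable values, and this is not minikube's ssh_runner.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder credentials: a real token/hash comes from
        // `kubeadm token create --print-join-command` on an existing control plane.
        args := []string{
            "join", "control-plane.minikube.internal:8443",
            "--token", "<token>",
            "--discovery-token-ca-cert-hash", "sha256:<hash>",
            "--control-plane",
            "--apiserver-advertise-address", "192.168.39.83",
            "--apiserver-bind-port", "8443",
            "--node-name", "ha-199780-m02",
            "--cri-socket", "unix:///var/run/crio/crio.sock",
            "--ignore-preflight-errors=all",
        }
        cmd := exec.Command("kubeadm", args...)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "kubeadm join failed:", err)
            os.Exit(1)
        }
    }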
	I1009 19:12:19.868265   28654 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:19.868592   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:19.870842   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:12:19.872112   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:12:20.132051   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:12:20.184872   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:12:20.185127   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:12:20.185184   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:12:20.185366   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m02" to be "Ready" ...
	I1009 19:12:20.185447   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.185457   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.185464   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.185468   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.196121   28654 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1009 19:12:20.685641   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.685666   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.685677   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.685683   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.700948   28654 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1009 19:12:21.186360   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.186379   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.186386   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.186390   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.190077   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:21.686495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.686523   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.686535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.686542   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.689757   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.185915   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.185938   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.185949   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.185955   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.189220   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.189830   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:22.685885   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.685909   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.685925   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.685930   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.692565   28654 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 19:12:23.186131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.186153   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.186163   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.186170   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.190703   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:23.685823   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.685851   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.685864   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.685874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.689295   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:24.186259   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.186290   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.186302   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.190419   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:24.190953   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:24.686386   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.686405   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.686412   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.686418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.689349   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:25.186405   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.186431   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.186443   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.186448   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.189677   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:25.685894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.685917   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.685930   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.685938   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.688721   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:26.185700   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.185718   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.185725   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.185729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.189091   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:26.686200   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.686219   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.686227   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.686233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.691177   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:26.691800   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:27.186166   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.186200   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.186216   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.186227   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.208799   28654 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1009 19:12:27.686569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.686596   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.686606   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.686611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.690120   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.186542   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.186562   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.186570   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.186574   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.189659   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.685814   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.685834   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.685842   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.685846   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.689015   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.185658   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.185692   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.185703   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.185708   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.188963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.189656   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:29.686079   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.686104   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.686115   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.686119   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.689437   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.186344   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.186367   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.186378   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.186384   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.189946   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.685870   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.685896   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.685904   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.685909   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.689100   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.186316   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.186342   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.186351   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.186356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.189992   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.190453   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:31.685857   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.685878   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.685886   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.685890   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.689411   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:32.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.186439   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.186450   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.186457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.189297   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:32.686105   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.686126   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.686134   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.686138   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.689698   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.185993   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.186015   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.186024   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.186028   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.189373   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.685932   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.685955   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.685963   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.685968   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.689670   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.690285   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:34.185640   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.185662   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.185670   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.185674   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.188694   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:34.686203   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.686223   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.686231   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.690146   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.185607   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.185628   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.185636   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.185640   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.188854   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.685726   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.685746   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.685759   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.685764   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.689172   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.186278   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.186301   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.186312   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.189767   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.190519   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:36.685809   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.685841   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.685849   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.685853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.688923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.185894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.185920   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.185933   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.185940   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.189465   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.686197   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.686222   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.686230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.689394   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.185922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.185948   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.185956   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.185961   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.189255   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.685706   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.685729   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.685742   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.685751   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.689204   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.689971   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:39.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.186433   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.186447   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.186452   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.189522   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.190154   28654 node_ready.go:49] node "ha-199780-m02" has status "Ready":"True"
	I1009 19:12:39.190172   28654 node_ready.go:38] duration metric: took 19.004790985s for node "ha-199780-m02" to be "Ready" ...
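(The readiness wait above is a plain poll of GET /api/v1/nodes/<name> roughly every 500ms until the NodeReady condition reports "True"; minikube does this through client-go, which is where the round_trippers request logs come from. The sketch below reproduces the same check with only net/http and encoding/json; the bearer-token auth and skipped TLS verification are illustrative assumptions, since the real client authenticates with the profile's client certificates.)

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus mirrors just the fields needed to read the Ready condition.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // waitForNodeReady polls the API server until the named node reports Ready=True.
    func waitForNodeReady(apiServer, node, token string, timeout time.Duration) error {
        client := &http.Client{
            // Illustrative only: a real client trusts the cluster CA and uses client certs.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            req, _ := http.NewRequest("GET", apiServer+"/api/v1/nodes/"+node, nil)
            req.Header.Set("Authorization", "Bearer "+token)
            resp, err := client.Do(req)
            if err == nil {
                var ns nodeStatus
                if json.NewDecoder(resp.Body).Decode(&ns) == nil {
                    for _, c := range ns.Status.Conditions {
                        if c.Type == "Ready" && c.Status == "True" {
                            resp.Body.Close()
                            return nil
                        }
                    }
                }
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %s", node, timeout)
    }

    func main() {
        if err := waitForNodeReady("https://192.168.39.114:8443", "ha-199780-m02", "<token>", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }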
	I1009 19:12:39.190183   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:12:39.190256   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:39.190268   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.190277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.190292   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.194625   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:39.201057   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.201129   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:12:39.201137   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.201144   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.201149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.203552   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.204277   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.204291   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.204298   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.204303   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.206434   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.207017   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.207033   28654 pod_ready.go:82] duration metric: took 5.954504ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207041   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:12:39.207128   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.207139   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.207148   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.209367   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.210180   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.210198   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.210204   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.210207   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.212254   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.212911   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.212929   28654 pod_ready.go:82] duration metric: took 5.881939ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212939   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212996   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:12:39.213004   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.213010   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.213014   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.215519   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.216198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.216212   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.216222   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.216228   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.218680   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.219274   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.219293   28654 pod_ready.go:82] duration metric: took 6.345815ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219306   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219361   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:12:39.219370   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.219379   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.219388   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.222905   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.223852   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.223867   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.223874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.223880   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.226122   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.226546   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.226559   28654 pod_ready.go:82] duration metric: took 7.244216ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.226571   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.386954   28654 request.go:632] Waited for 160.312334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387019   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387028   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.387041   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.387059   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.390052   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.587135   28654 request.go:632] Waited for 196.31885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587196   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587203   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.587211   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.587219   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.590448   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.591164   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.591183   28654 pod_ready.go:82] duration metric: took 364.606313ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
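(The "Waited for ... due to client-side throttling, not priority and fairness" messages come from the Kubernetes client's own QPS/burst rate limiter, not from server-side API Priority and Fairness. The sketch below shows that mechanism in isolation with golang.org/x/time/rate; the QPS and burst values are illustrative, chosen to match client-go's usual defaults of QPS 5 and burst 10.)

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Illustrative client-side limits (QPS 5, burst 10).
        limiter := rate.NewLimiter(rate.Limit(5), 10)

        for i := 0; i < 15; i++ {
            start := time.Now()
            // Wait blocks until the limiter allows another request, which is what
            // produces the "Waited for Xms due to client-side throttling" log lines.
            if err := limiter.Wait(context.Background()); err != nil {
                panic(err)
            }
            if waited := time.Since(start); waited > time.Millisecond {
                fmt.Printf("request %d waited %s before being sent\n", i, waited)
            }
        }
    }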
	I1009 19:12:39.591192   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.787247   28654 request.go:632] Waited for 195.987261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787335   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.787346   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.787354   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.790620   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.986772   28654 request.go:632] Waited for 195.363358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986825   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986830   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.986837   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.986840   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.990003   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.990664   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.990682   28654 pod_ready.go:82] duration metric: took 399.483816ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.990691   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.186433   28654 request.go:632] Waited for 195.681011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186513   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186524   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.186535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.186544   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.189683   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.386818   28654 request.go:632] Waited for 196.355604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386887   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386893   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.386900   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.386905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.391133   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:40.391614   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.391638   28654 pod_ready.go:82] duration metric: took 400.93972ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.391651   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.586680   28654 request.go:632] Waited for 194.949325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586742   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.586750   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.586755   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.590444   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.786422   28654 request.go:632] Waited for 195.280915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786501   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.786509   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.786513   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.790326   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.791006   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.791029   28654 pod_ready.go:82] duration metric: took 399.365639ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.791046   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.987070   28654 request.go:632] Waited for 195.933748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987136   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.987143   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.987147   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.990605   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.186624   28654 request.go:632] Waited for 195.268606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186692   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186704   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.186711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.186715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.189956   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.190470   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.190489   28654 pod_ready.go:82] duration metric: took 399.435329ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.190501   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.386649   28654 request.go:632] Waited for 196.07336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386706   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.386713   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.386716   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.390032   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.587033   28654 request.go:632] Waited for 196.334104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587126   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587138   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.587149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.587167   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.590021   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.590641   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.590663   28654 pod_ready.go:82] duration metric: took 400.153892ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.590678   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.786648   28654 request.go:632] Waited for 195.890444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786708   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.786719   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.786729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.789369   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.987345   28654 request.go:632] Waited for 197.361828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987411   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987416   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.987424   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.987427   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.990745   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.991278   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.991294   28654 pod_ready.go:82] duration metric: took 400.607782ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.991303   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.187413   28654 request.go:632] Waited for 196.036626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187472   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187478   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.187488   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.187495   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.190480   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.386422   28654 request.go:632] Waited for 195.271897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386476   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386482   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.386489   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.386493   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.389175   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.389733   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:42.389754   28654 pod_ready.go:82] duration metric: took 398.44435ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.389768   28654 pod_ready.go:39] duration metric: took 3.199572136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
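(The pod_ready waits logged above poll each system-critical pod until its Ready condition reports True, pacing requests with client-side throttling. A minimal sketch of that polling pattern with client-go follows; it is not minikube's pod_ready.go, and the kubeconfig path is a hypothetical placeholder.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True
// or the timeout expires, roughly mirroring the waits logged above.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(400 * time.Millisecond) // crude client-side pacing between GETs
	}
	return fmt.Errorf("pod %s not Ready within %s", name, timeout)
}

func main() {
	// Hypothetical kubeconfig path; the test harness builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-scheduler-ha-199780", 6*time.Minute); err != nil {
		panic(err)
	}
}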
	I1009 19:12:42.389785   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:12:42.389849   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:42.407811   28654 api_server.go:72] duration metric: took 22.539512335s to wait for apiserver process to appear ...
	I1009 19:12:42.407834   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:12:42.407855   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:12:42.414877   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:12:42.414962   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:12:42.414974   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.414984   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.414991   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.416098   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:12:42.416185   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:12:42.416202   28654 api_server.go:131] duration metric: took 8.360977ms to wait for apiserver health ...
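(The healthz and version probes above are plain HTTPS GETs against the API server endpoint. A minimal standard-library sketch is below; certificate verification is skipped only to keep the sketch short, whereas the real client authenticates with the cluster CA and client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification is an illustration-only shortcut.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.114:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
}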
	I1009 19:12:42.416212   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:12:42.587017   28654 request.go:632] Waited for 170.742751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587127   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587142   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.587151   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.587157   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.592323   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:12:42.596935   28654 system_pods.go:59] 17 kube-system pods found
	I1009 19:12:42.596960   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.596966   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.596971   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.596974   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.596977   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.596980   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.596983   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.596991   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.596995   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.597000   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.597004   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.597007   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.597011   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.597015   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.597018   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.597023   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.597026   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.597031   28654 system_pods.go:74] duration metric: took 180.813466ms to wait for pod list to return data ...
	I1009 19:12:42.597039   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:12:42.787461   28654 request.go:632] Waited for 190.355387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787510   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787515   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.787523   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.787526   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.791707   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.791908   28654 default_sa.go:45] found service account: "default"
	I1009 19:12:42.791921   28654 default_sa.go:55] duration metric: took 194.876803ms for default service account to be created ...
	I1009 19:12:42.791929   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:12:42.987347   28654 request.go:632] Waited for 195.347718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987402   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987407   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.987415   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.987418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.992125   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.996490   28654 system_pods.go:86] 17 kube-system pods found
	I1009 19:12:42.996520   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.996536   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.996541   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.996545   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.996552   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.996564   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.996567   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.996571   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.996576   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.996580   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.996583   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.996587   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.996591   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.996594   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.996598   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.996603   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.996605   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.996612   28654 system_pods.go:126] duration metric: took 204.678176ms to wait for k8s-apps to be running ...
	I1009 19:12:42.996621   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:12:42.996661   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:12:43.012943   28654 system_svc.go:56] duration metric: took 16.312977ms WaitForService to wait for kubelet
	I1009 19:12:43.012964   28654 kubeadm.go:582] duration metric: took 23.14466791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:12:43.012979   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:12:43.186683   28654 request.go:632] Waited for 173.643549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186731   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186737   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:43.186744   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:43.186750   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:43.190743   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:43.191568   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191597   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191608   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191612   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191618   28654 node_conditions.go:105] duration metric: took 178.633815ms to run NodePressure ...
	I1009 19:12:43.191635   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:12:43.191663   28654 start.go:255] writing updated cluster config ...
	I1009 19:12:43.193878   28654 out.go:201] 
	I1009 19:12:43.195204   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:43.195296   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.196947   28654 out.go:177] * Starting "ha-199780-m03" control-plane node in "ha-199780" cluster
	I1009 19:12:43.198242   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:12:43.198257   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:12:43.198354   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:12:43.198368   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:12:43.198453   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.198644   28654 start.go:360] acquireMachinesLock for ha-199780-m03: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:12:43.198693   28654 start.go:364] duration metric: took 30.243µs to acquireMachinesLock for "ha-199780-m03"
	I1009 19:12:43.198715   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:43.198839   28654 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1009 19:12:43.200292   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:12:43.200365   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:12:43.200395   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:12:43.215501   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I1009 19:12:43.215883   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:12:43.216432   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:12:43.216461   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:12:43.216780   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:12:43.216973   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:12:43.217128   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:12:43.217269   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:12:43.217296   28654 client.go:168] LocalClient.Create starting
	I1009 19:12:43.217327   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:12:43.217360   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217379   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217439   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:12:43.217464   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217486   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217518   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:12:43.217529   28654 main.go:141] libmachine: (ha-199780-m03) Calling .PreCreateCheck
	I1009 19:12:43.217680   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:12:43.218031   28654 main.go:141] libmachine: Creating machine...
	I1009 19:12:43.218043   28654 main.go:141] libmachine: (ha-199780-m03) Calling .Create
	I1009 19:12:43.218158   28654 main.go:141] libmachine: (ha-199780-m03) Creating KVM machine...
	I1009 19:12:43.219370   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing default KVM network
	I1009 19:12:43.219545   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing private KVM network mk-ha-199780
	I1009 19:12:43.219670   28654 main.go:141] libmachine: (ha-199780-m03) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.219694   28654 main.go:141] libmachine: (ha-199780-m03) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:12:43.219770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.219647   29426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.219839   28654 main.go:141] libmachine: (ha-199780-m03) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:12:43.456571   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.456478   29426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa...
	I1009 19:12:43.637087   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637007   29426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk...
	I1009 19:12:43.637111   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing magic tar header
	I1009 19:12:43.637123   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing SSH key tar header
	I1009 19:12:43.637132   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637111   29426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.637237   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03
	I1009 19:12:43.637256   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 (perms=drwx------)
	I1009 19:12:43.637263   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:12:43.637277   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:12:43.637285   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.637293   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:12:43.637301   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:12:43.637308   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:12:43.637313   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home
	I1009 19:12:43.637322   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Skipping /home - not owner
	I1009 19:12:43.637330   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:12:43.637338   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:12:43.637345   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:12:43.637355   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:12:43.637364   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:43.638194   28654 main.go:141] libmachine: (ha-199780-m03) define libvirt domain using xml: 
	I1009 19:12:43.638216   28654 main.go:141] libmachine: (ha-199780-m03) <domain type='kvm'>
	I1009 19:12:43.638226   28654 main.go:141] libmachine: (ha-199780-m03)   <name>ha-199780-m03</name>
	I1009 19:12:43.638239   28654 main.go:141] libmachine: (ha-199780-m03)   <memory unit='MiB'>2200</memory>
	I1009 19:12:43.638251   28654 main.go:141] libmachine: (ha-199780-m03)   <vcpu>2</vcpu>
	I1009 19:12:43.638258   28654 main.go:141] libmachine: (ha-199780-m03)   <features>
	I1009 19:12:43.638266   28654 main.go:141] libmachine: (ha-199780-m03)     <acpi/>
	I1009 19:12:43.638275   28654 main.go:141] libmachine: (ha-199780-m03)     <apic/>
	I1009 19:12:43.638288   28654 main.go:141] libmachine: (ha-199780-m03)     <pae/>
	I1009 19:12:43.638296   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638304   28654 main.go:141] libmachine: (ha-199780-m03)   </features>
	I1009 19:12:43.638314   28654 main.go:141] libmachine: (ha-199780-m03)   <cpu mode='host-passthrough'>
	I1009 19:12:43.638338   28654 main.go:141] libmachine: (ha-199780-m03)   
	I1009 19:12:43.638360   28654 main.go:141] libmachine: (ha-199780-m03)   </cpu>
	I1009 19:12:43.638375   28654 main.go:141] libmachine: (ha-199780-m03)   <os>
	I1009 19:12:43.638386   28654 main.go:141] libmachine: (ha-199780-m03)     <type>hvm</type>
	I1009 19:12:43.638397   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='cdrom'/>
	I1009 19:12:43.638406   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='hd'/>
	I1009 19:12:43.638416   28654 main.go:141] libmachine: (ha-199780-m03)     <bootmenu enable='no'/>
	I1009 19:12:43.638425   28654 main.go:141] libmachine: (ha-199780-m03)   </os>
	I1009 19:12:43.638435   28654 main.go:141] libmachine: (ha-199780-m03)   <devices>
	I1009 19:12:43.638451   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='cdrom'>
	I1009 19:12:43.638468   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/boot2docker.iso'/>
	I1009 19:12:43.638480   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hdc' bus='scsi'/>
	I1009 19:12:43.638491   28654 main.go:141] libmachine: (ha-199780-m03)       <readonly/>
	I1009 19:12:43.638498   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638511   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='disk'>
	I1009 19:12:43.638529   28654 main.go:141] libmachine: (ha-199780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:12:43.638545   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk'/>
	I1009 19:12:43.638557   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hda' bus='virtio'/>
	I1009 19:12:43.638566   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638575   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638585   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='mk-ha-199780'/>
	I1009 19:12:43.638600   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638613   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638624   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638637   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='default'/>
	I1009 19:12:43.638647   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638658   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638665   28654 main.go:141] libmachine: (ha-199780-m03)     <serial type='pty'>
	I1009 19:12:43.638685   28654 main.go:141] libmachine: (ha-199780-m03)       <target port='0'/>
	I1009 19:12:43.638701   28654 main.go:141] libmachine: (ha-199780-m03)     </serial>
	I1009 19:12:43.638713   28654 main.go:141] libmachine: (ha-199780-m03)     <console type='pty'>
	I1009 19:12:43.638724   28654 main.go:141] libmachine: (ha-199780-m03)       <target type='serial' port='0'/>
	I1009 19:12:43.638734   28654 main.go:141] libmachine: (ha-199780-m03)     </console>
	I1009 19:12:43.638742   28654 main.go:141] libmachine: (ha-199780-m03)     <rng model='virtio'>
	I1009 19:12:43.638760   28654 main.go:141] libmachine: (ha-199780-m03)       <backend model='random'>/dev/random</backend>
	I1009 19:12:43.638775   28654 main.go:141] libmachine: (ha-199780-m03)     </rng>
	I1009 19:12:43.638786   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638796   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638812   28654 main.go:141] libmachine: (ha-199780-m03)   </devices>
	I1009 19:12:43.638828   28654 main.go:141] libmachine: (ha-199780-m03) </domain>
	I1009 19:12:43.638836   28654 main.go:141] libmachine: (ha-199780-m03) 
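(The block above is the libvirt domain XML the kvm2 driver generates for the new node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk, and two virtio NICs on the default and mk-ha-199780 networks. Outside the driver, the equivalent manual steps would be registering and booting the domain with virsh; a rough sketch follows, assuming the XML has been written to a file rather than held in memory as the driver does.)

package main

import (
	"log"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it,
// roughly what the kvm2 driver does through the libvirt API.
func defineAndStart(xmlPath, domain string) error {
	for _, args := range [][]string{
		{"define", xmlPath},
		{"start", domain},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return err
		}
		log.Printf("virsh %v: %s", args, out)
	}
	return nil
}

func main() {
	// Hypothetical path for the sketch only.
	if err := defineAndStart("/tmp/ha-199780-m03.xml", "ha-199780-m03"); err != nil {
		log.Fatal(err)
	}
}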
	I1009 19:12:43.645429   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:1f:d1:3b in network default
	I1009 19:12:43.645983   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:43.646001   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring networks are active...
	I1009 19:12:43.646747   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network default is active
	I1009 19:12:43.647149   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network mk-ha-199780 is active
	I1009 19:12:43.647523   28654 main.go:141] libmachine: (ha-199780-m03) Getting domain xml...
	I1009 19:12:43.648287   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:44.847549   28654 main.go:141] libmachine: (ha-199780-m03) Waiting to get IP...
	I1009 19:12:44.848392   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:44.848787   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:44.848829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:44.848770   29426 retry.go:31] will retry after 229.997293ms: waiting for machine to come up
	I1009 19:12:45.079971   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.080455   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.080486   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.080421   29426 retry.go:31] will retry after 304.992826ms: waiting for machine to come up
	I1009 19:12:45.386902   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.387362   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.387386   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.387322   29426 retry.go:31] will retry after 327.958718ms: waiting for machine to come up
	I1009 19:12:45.716733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.717214   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.717239   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.717174   29426 retry.go:31] will retry after 508.576077ms: waiting for machine to come up
	I1009 19:12:46.227904   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.228327   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.228353   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.228287   29426 retry.go:31] will retry after 585.555609ms: waiting for machine to come up
	I1009 19:12:46.814896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.815296   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.815326   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.815257   29426 retry.go:31] will retry after 940.877771ms: waiting for machine to come up
	I1009 19:12:47.757334   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:47.757733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:47.757767   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:47.757680   29426 retry.go:31] will retry after 1.078987913s: waiting for machine to come up
	I1009 19:12:48.838156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:48.838584   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:48.838612   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:48.838534   29426 retry.go:31] will retry after 1.204337562s: waiting for machine to come up
	I1009 19:12:50.044036   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:50.044425   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:50.044447   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:50.044387   29426 retry.go:31] will retry after 1.424565558s: waiting for machine to come up
	I1009 19:12:51.470825   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:51.471291   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:51.471328   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:51.471250   29426 retry.go:31] will retry after 1.95975676s: waiting for machine to come up
	I1009 19:12:53.432604   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:53.433116   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:53.433142   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:53.433070   29426 retry.go:31] will retry after 2.780245822s: waiting for machine to come up
	I1009 19:12:56.216025   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:56.216374   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:56.216395   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:56.216337   29426 retry.go:31] will retry after 3.28653641s: waiting for machine to come up
	I1009 19:12:59.504791   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:59.505156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:59.505184   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:59.505128   29426 retry.go:31] will retry after 4.186849932s: waiting for machine to come up
	I1009 19:13:03.693337   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:03.693747   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:13:03.693770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:13:03.693703   29426 retry.go:31] will retry after 5.146937605s: waiting for machine to come up
	I1009 19:13:08.842460   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.842868   28654 main.go:141] libmachine: (ha-199780-m03) Found IP for machine: 192.168.39.84
	I1009 19:13:08.842887   28654 main.go:141] libmachine: (ha-199780-m03) Reserving static IP address...
	I1009 19:13:08.842896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.843320   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find host DHCP lease matching {name: "ha-199780-m03", mac: "52:54:00:15:92:44", ip: "192.168.39.84"} in network mk-ha-199780
	I1009 19:13:08.913543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Getting to WaitForSSH function...
	I1009 19:13:08.913573   28654 main.go:141] libmachine: (ha-199780-m03) Reserved static IP address: 192.168.39.84
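(Getting an IP for the new VM is a retry loop: the driver re-reads the network's DHCP leases with a growing delay until the domain's MAC address shows up, as the retry.go lines above record. A minimal sketch of the same pattern is below; the lookup function is a stand-in, not the driver's code.)

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup() with a growing, jittered delay until it returns
// an address or the overall deadline passes.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("no IP before deadline")
}

func main() {
	// Stand-in lookup: a real one would parse the libvirt DHCP leases for
	// the domain's MAC address (52:54:00:15:92:44 above).
	ip, err := waitForIP(func() (string, error) { return "", errors.New("not yet") }, 2*time.Second)
	fmt.Println(ip, err)
}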
	I1009 19:13:08.913586   28654 main.go:141] libmachine: (ha-199780-m03) Waiting for SSH to be available...
	I1009 19:13:08.916270   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916658   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:92:44}
	I1009 19:13:08.916682   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916805   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH client type: external
	I1009 19:13:08.916829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa (-rw-------)
	I1009 19:13:08.916873   28654 main.go:141] libmachine: (ha-199780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:13:08.916898   28654 main.go:141] libmachine: (ha-199780-m03) DBG | About to run SSH command:
	I1009 19:13:08.916914   28654 main.go:141] libmachine: (ha-199780-m03) DBG | exit 0
	I1009 19:13:09.046941   28654 main.go:141] libmachine: (ha-199780-m03) DBG | SSH cmd err, output: <nil>: 
	I1009 19:13:09.047218   28654 main.go:141] libmachine: (ha-199780-m03) KVM machine creation complete!
	I1009 19:13:09.047540   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:09.048076   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048290   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048435   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:13:09.048449   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetState
	I1009 19:13:09.049768   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:13:09.049784   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:13:09.049792   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:13:09.049800   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.051899   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052232   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.052256   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052390   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.052558   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052690   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052792   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.052919   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.053134   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.053146   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:13:09.162161   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.162193   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:13:09.162204   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.165282   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165740   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.165770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165998   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.166189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166372   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166511   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.166658   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.166820   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.166830   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:13:09.279803   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:13:09.279876   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:13:09.279888   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:13:09.279896   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280130   28654 buildroot.go:166] provisioning hostname "ha-199780-m03"
	I1009 19:13:09.280155   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280355   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.282543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.282879   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.282903   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.283031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.283188   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283335   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283479   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.283637   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.283800   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.283813   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m03 && echo "ha-199780-m03" | sudo tee /etc/hostname
	I1009 19:13:09.410249   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m03
	
	I1009 19:13:09.410286   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.413156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.413597   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413831   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.414036   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414350   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.414484   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.414653   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.414676   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:13:09.536419   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
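(Hostname provisioning above is the driver's SSH runner executing shell snippets on the new VM: write /etc/hostname, then idempotently add a 127.0.1.1 entry to /etc/hosts. A minimal sketch of that run-command-over-SSH pattern with golang.org/x/crypto/ssh follows; the address and key path are taken from the log, and this is not the libmachine runner itself.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH executes one shell command on the target VM as the docker user,
// analogous to how the provisioner applies the hostname snippets above.
func runSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.168.39.84:22",
		"/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa",
		`sudo hostname ha-199780-m03 && echo "ha-199780-m03" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}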
	I1009 19:13:09.536443   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:13:09.536456   28654 buildroot.go:174] setting up certificates
	I1009 19:13:09.536466   28654 provision.go:84] configureAuth start
	I1009 19:13:09.536474   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.536766   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:09.539383   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539742   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.539769   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539905   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.542068   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542398   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.542433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542583   28654 provision.go:143] copyHostCerts
	I1009 19:13:09.542606   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542633   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:13:09.542642   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542706   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:13:09.542776   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542794   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:13:09.542798   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542825   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:13:09.542870   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542886   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:13:09.542891   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542910   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:13:09.542956   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m03 san=[127.0.0.1 192.168.39.84 ha-199780-m03 localhost minikube]
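(configureAuth generates a per-machine server certificate signed by the minikube CA, with the node's hostname and addresses as SANs: 127.0.0.1, 192.168.39.84, ha-199780-m03, localhost, minikube above. A compressed standard-library sketch of issuing such a cert is below; it creates a throwaway CA for self-containment, whereas minikube reuses ca.pem/ca-key.pem, and it is not minikube's own helper.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch only.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-199780-m03"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"ha-199780-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.84")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}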
	I1009 19:13:09.606712   28654 provision.go:177] copyRemoteCerts
	I1009 19:13:09.606761   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:13:09.606781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.609303   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609661   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.609689   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609868   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.610022   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.610145   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.610298   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:09.696779   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:13:09.696841   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:13:09.720751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:13:09.720811   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:13:09.744059   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:13:09.744114   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:13:09.767833   28654 provision.go:87] duration metric: took 231.356763ms to configureAuth
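The configureAuth step logged above signs a host server certificate whose SANs cover the loopback address, the machine IP, the machine name, localhost and minikube. A minimal, self-contained Go sketch of that pattern follows; it is not minikube's provision code, it self-signs for brevity instead of signing with ca.pem/ca-key.pem, and the one-year validity is an assumption.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN list mirroring the san=[...] entry in the log above.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.84")}
	dns := []string{"ha-199780-m03", "localhost", "minikube"}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}

	// Self-signed here for brevity; the real flow signs with the cluster CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}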
	I1009 19:13:09.767867   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:13:09.768111   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:09.768195   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.770602   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.770927   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.770956   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.771124   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.771314   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771473   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.771780   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.771973   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.772002   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:13:09.999632   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:13:09.999662   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:13:09.999673   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetURL
	I1009 19:13:10.001043   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using libvirt version 6000000
	I1009 19:13:10.002982   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003339   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.003364   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003485   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:13:10.003499   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:13:10.003506   28654 client.go:171] duration metric: took 26.786200346s to LocalClient.Create
	I1009 19:13:10.003528   28654 start.go:167] duration metric: took 26.786259048s to libmachine.API.Create "ha-199780"
	I1009 19:13:10.003541   28654 start.go:293] postStartSetup for "ha-199780-m03" (driver="kvm2")
	I1009 19:13:10.003557   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:13:10.003580   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.003751   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:13:10.003777   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.005954   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006305   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.006342   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006472   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.006621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.006781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.006914   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.097042   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:13:10.101538   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:13:10.101559   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:13:10.101628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:13:10.101716   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:13:10.101727   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:13:10.101831   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:13:10.111544   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:10.138321   28654 start.go:296] duration metric: took 134.764482ms for postStartSetup
	I1009 19:13:10.138362   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:10.138886   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.141464   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.141752   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.141798   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.142045   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:13:10.142239   28654 start.go:128] duration metric: took 26.94338984s to createHost
	I1009 19:13:10.142260   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.144573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.144860   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.144895   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.145048   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.145233   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145397   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145561   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.145727   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:10.145915   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:10.145928   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:13:10.259958   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501190.239755663
	
	I1009 19:13:10.259981   28654 fix.go:216] guest clock: 1728501190.239755663
	I1009 19:13:10.259990   28654 fix.go:229] Guest: 2024-10-09 19:13:10.239755663 +0000 UTC Remote: 2024-10-09 19:13:10.142249873 +0000 UTC m=+147.747443556 (delta=97.50579ms)
	I1009 19:13:10.260009   28654 fix.go:200] guest clock delta is within tolerance: 97.50579ms
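The guest-clock check above runs `date +%s.%N` inside the VM and compares it with the host wall clock; the 97.5 ms delta is accepted because it falls inside a tolerance. A small sketch of that comparison follows, using the exact values from the log lines above; the one-second tolerance is an assumption, since the real threshold is not shown here.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad or truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Guest and remote timestamps copied from the log; tolerance is assumed.
	guest, err := parseGuestClock("1728501190.239755663")
	if err != nil {
		panic(err)
	}
	local := time.Unix(1728501190, 142249873)
	delta := guest.Sub(local)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= time.Second)
}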
	I1009 19:13:10.260014   28654 start.go:83] releasing machines lock for "ha-199780-m03", held for 27.061310572s
	I1009 19:13:10.260031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.260248   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.262692   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.263042   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.263090   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.265368   28654 out.go:177] * Found network options:
	I1009 19:13:10.266603   28654 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.83
	W1009 19:13:10.267719   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.267740   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.267752   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268176   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268354   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268457   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:13:10.268495   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	W1009 19:13:10.268522   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.268539   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.268607   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:13:10.268629   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.271001   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271378   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271413   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271563   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.271675   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.271760   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.271841   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.271883   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271905   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.272050   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.272201   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.272349   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.272499   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.509806   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:13:10.515665   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:13:10.515723   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:13:10.534296   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:13:10.534319   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:13:10.534372   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:13:10.550041   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:13:10.563633   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:13:10.563683   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:13:10.577637   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:13:10.592588   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:13:10.712305   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:13:10.879292   28654 docker.go:233] disabling docker service ...
	I1009 19:13:10.879381   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:13:10.894134   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:13:10.907059   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:13:11.025068   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:13:11.146057   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:13:11.160573   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:13:11.181994   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:13:11.182045   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.191765   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:13:11.191812   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.201883   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.212073   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.222390   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:13:11.232857   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.243298   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.262217   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
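The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The same kind of in-place rewrite can be sketched in Go with a multiline regexp; the path and value below simply mirror the log and are not taken from minikube's own code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupManager replaces the cgroup_manager line of a CRI-O drop-in,
// the same effect as the `sed -i 's|^.*cgroup_manager = .*$|...|'` call above.
func setCgroupManager(path, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path and value taken from the log; adjust for a real system.
	if err := setCgroupManager("/etc/crio/crio.conf.d/02-crio.conf", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}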
	I1009 19:13:11.272906   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:13:11.282747   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:13:11.282797   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:13:11.296609   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:13:11.306096   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:11.423441   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:13:11.515740   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:13:11.515821   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:13:11.520647   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:13:11.520700   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:13:11.524288   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:13:11.564050   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:13:11.564119   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.592463   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.620536   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:13:11.622484   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:13:11.623769   28654 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.83
	I1009 19:13:11.624794   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:11.627494   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.627836   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:11.627861   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.628050   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:13:11.632057   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:11.644307   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:13:11.644526   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:11.644823   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.644864   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.660098   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1009 19:13:11.660500   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.660929   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.660963   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.661312   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.661490   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:13:11.662965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:11.663268   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.663304   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.677584   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I1009 19:13:11.678002   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.678412   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.678433   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.678716   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.678874   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:11.678992   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.84
	I1009 19:13:11.679002   28654 certs.go:194] generating shared ca certs ...
	I1009 19:13:11.679014   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.679142   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:13:11.679180   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:13:11.679190   28654 certs.go:256] generating profile certs ...
	I1009 19:13:11.679253   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:13:11.679275   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8
	I1009 19:13:11.679293   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:13:11.751003   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 ...
	I1009 19:13:11.751029   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8: {Name:mkf155e8357b65010528843e053f2a71f20ad105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751190   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 ...
	I1009 19:13:11.751202   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8: {Name:mk6ff6d5eec7167bd850e69dc06edb50691eb6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751267   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:13:11.751393   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:13:11.751509   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:13:11.751523   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:13:11.751535   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:13:11.751550   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:13:11.751563   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:13:11.751576   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:13:11.751588   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:13:11.751600   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:13:11.771159   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:13:11.771229   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:13:11.771259   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:13:11.771269   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:13:11.771293   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:13:11.771314   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:13:11.771335   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:13:11.771370   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:11.771395   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:13:11.771408   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:11.771420   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:13:11.771451   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:11.774438   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.774845   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:11.774865   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.775017   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:11.775204   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:11.775350   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:11.775478   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:11.851359   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:13:11.856664   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:13:11.868123   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:13:11.875260   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:13:11.887341   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:13:11.891724   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:13:11.902332   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:13:11.906621   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:13:11.916908   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:13:11.921562   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:13:11.931584   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:13:11.935971   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:13:11.946941   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:13:11.972757   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:13:11.996080   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:13:12.019624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:13:12.042711   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1009 19:13:12.067239   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:13:12.094118   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:13:12.120234   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:13:12.143055   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:13:12.165868   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:13:12.188853   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:13:12.211293   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:13:12.227623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:13:12.243623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:13:12.260811   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:13:12.278131   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:13:12.295237   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:13:12.312441   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:13:12.328516   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:13:12.334428   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:13:12.345201   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349589   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.355741   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:13:12.366097   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:13:12.376756   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381423   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381474   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.387265   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:13:12.398550   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:13:12.410065   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414879   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414939   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.420521   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
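The openssl/ln pairs above install each CA under /etc/ssl/certs as <subject-hash>.0, the naming convention OpenSSL uses to locate trusted certificates by subject. A compact sketch of that pairing follows; the paths are illustrative, and it shells out to the same `openssl x509 -hash -noout` invocation seen in the log.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert symlinks certPath into certsDir under OpenSSL's <subject-hash>.0 convention.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths matching the ones in the log above.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}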
	I1009 19:13:12.431459   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:13:12.435599   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:13:12.435653   28654 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I1009 19:13:12.435745   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:13:12.435776   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:13:12.435816   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:13:12.450815   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:13:12.450880   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:13:12.450927   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.462732   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:13:12.462797   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.473333   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1009 19:13:12.473358   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473356   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:13:12.473375   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473392   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1009 19:13:12.473419   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473431   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473433   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:12.484568   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:13:12.484600   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1009 19:13:12.496090   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496156   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:13:12.496169   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496179   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:13:12.547231   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:13:12.547271   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:13:13.298298   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:13:13.308347   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:13:13.325500   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:13:13.341701   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:13:13.358009   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:13:13.361852   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:13.374963   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:13.498686   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:13.518977   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:13.519473   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:13.519531   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:13.538200   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I1009 19:13:13.538624   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:13.539117   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:13.539147   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:13.539481   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:13.539662   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:13.539788   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:13:13.539943   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:13:13.539967   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:13.542836   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543274   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:13.543303   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543418   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:13.543577   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:13.543722   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:13.543861   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:13.700075   28654 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:13.700122   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I1009 19:13:36.009706   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (22.309560416s)
	I1009 19:13:36.009741   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:13:36.574647   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m03 minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:13:36.718344   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:13:36.828582   28654 start.go:319] duration metric: took 23.288789983s to joinCluster
	I1009 19:13:36.828663   28654 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:36.828971   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:36.830104   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:13:36.831350   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:37.149519   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:37.192508   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:13:37.192892   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:13:37.192972   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:13:37.193248   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m03" to be "Ready" ...
	I1009 19:13:37.193328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.193338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.193350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.193359   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.197001   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:37.693747   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.693768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.693780   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.693785   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.697648   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.193891   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.193913   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.193924   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.193929   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.197274   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.693429   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.693457   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.693469   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.693474   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.696864   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:39.193488   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.193508   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.193514   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.193519   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.196227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:39.196768   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:39.694269   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.694294   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.694306   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.694313   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.697293   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:40.193909   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.193938   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.193948   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.193953   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.197226   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:40.693770   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.693793   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.693804   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.693809   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.697070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:41.194260   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.194291   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.194295   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.197138   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:41.197715   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:41.694049   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.694075   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.694087   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.694094   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.697134   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.194287   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.194311   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.194321   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.194327   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.197589   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.693552   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.693571   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.693581   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.693588   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.696963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.193761   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.193786   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.193798   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.193806   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.197438   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.198158   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:43.693694   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.693716   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.693724   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.693728   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.697267   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.193683   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.193704   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.193711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.193715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.197056   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.693897   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.693918   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.693928   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.693933   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.696914   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:45.193775   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.193795   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.193803   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.193807   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.197164   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.694421   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.694455   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.694461   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.697506   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.698052   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:46.193428   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.193455   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.193486   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.193492   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.197151   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:46.693979   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.693997   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.694013   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.694017   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.697611   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.193578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.193600   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.193607   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.193611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.197105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.693781   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.693802   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.693813   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.693817   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.696934   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:48.194335   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.194358   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.194365   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.194368   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.198434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:48.199180   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:48.693737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.693758   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.693768   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.693773   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.697344   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:49.193432   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.193451   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.193459   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.193463   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.196304   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:49.694364   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.694385   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.694396   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.694403   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.697486   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.193397   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.193418   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.193431   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.193435   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.197076   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.693831   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.693856   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.693867   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.693873   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.697369   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.698284   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:51.194258   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.194289   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.194294   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.197449   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:51.694317   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.694339   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.694350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.694356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.698049   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.194018   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.194043   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.194052   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.194061   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.197494   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.694202   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.694224   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.694232   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.694236   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.697227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:53.193702   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.193722   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.193729   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.193733   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.196923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:53.197555   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:53.694135   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.694158   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.694166   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.694172   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.697390   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:54.193409   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.193427   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.193439   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.193443   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.195968   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.693832   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.693853   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.693861   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.693866   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.696718   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.697386   28654 node_ready.go:49] node "ha-199780-m03" has status "Ready":"True"
	I1009 19:13:54.697405   28654 node_ready.go:38] duration metric: took 17.504141075s for node "ha-199780-m03" to be "Ready" ...
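The 17.5s node_ready wait above is a plain poll of the node's Ready condition roughly every 500ms until it reports True. A hedged sketch of the equivalent check (illustrative only, not minikube's code; waitNodeReady is a hypothetical helper, imports as in the earlier sketch plus time):

    // waitNodeReady polls until the named node reports Ready=True or ctx expires.
    func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
    	for {
    		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				return nil // corresponds to the `"Ready":"True"` line in the log
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }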
	I1009 19:13:54.697413   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:54.697463   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:13:54.697471   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.697479   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.697484   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.703461   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:13:54.710054   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.710118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:13:54.710126   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.710133   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.710136   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.712863   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.713585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.713602   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.713609   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.713613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.715857   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.716501   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.716519   28654 pod_ready.go:82] duration metric: took 6.443501ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716529   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:13:54.716586   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.716593   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.716599   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.718834   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.719475   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.719490   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.719499   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.719505   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.721592   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.722022   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.722036   28654 pod_ready.go:82] duration metric: took 5.49901ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722045   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722092   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:13:54.722102   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.722111   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.722117   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.724132   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.724537   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.724549   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.724558   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.724564   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.726416   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:13:54.726760   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.726774   28654 pod_ready.go:82] duration metric: took 4.721439ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726783   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726829   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:13:54.726838   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.726847   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.726853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.728868   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.729481   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:54.729499   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.729510   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.729515   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.731574   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.732095   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.732112   28654 pod_ready.go:82] duration metric: took 5.322203ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.732123   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.894472   28654 request.go:632] Waited for 162.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894602   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894612   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.894619   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.894623   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.897741   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.094188   28654 request.go:632] Waited for 195.683908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094240   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094246   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.094253   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.094258   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.097407   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.098074   28654 pod_ready.go:93] pod "etcd-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.098090   28654 pod_ready.go:82] duration metric: took 365.959261ms for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.098111   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.294211   28654 request.go:632] Waited for 196.026886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294264   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294270   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.294277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.294281   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.297814   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.494347   28654 request.go:632] Waited for 195.288987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494396   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494400   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.494409   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.494414   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.497640   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.498264   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.498282   28654 pod_ready.go:82] duration metric: took 400.159789ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.498295   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.694371   28654 request.go:632] Waited for 196.007868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694438   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.694452   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.694457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.697453   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:55.894821   28654 request.go:632] Waited for 196.365606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894877   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894894   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.894903   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.894908   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.898105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.898641   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.898656   28654 pod_ready.go:82] duration metric: took 400.354565ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.898665   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.094875   28654 request.go:632] Waited for 196.142376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094943   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094953   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.094962   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.094969   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.098488   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.294812   28654 request.go:632] Waited for 195.339632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294879   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294886   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.294897   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.294905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.298371   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.299243   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.299268   28654 pod_ready.go:82] duration metric: took 400.59742ms for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.299278   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.494432   28654 request.go:632] Waited for 195.083743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494487   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494493   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.494503   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.494508   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.498203   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.694515   28654 request.go:632] Waited for 195.651266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694574   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.694582   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.694589   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.697903   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.698503   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.698524   28654 pod_ready.go:82] duration metric: took 399.235411ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.698534   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.894604   28654 request.go:632] Waited for 196.010295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894690   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894699   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.894709   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.894725   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.897698   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:57.094771   28654 request.go:632] Waited for 196.347164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094830   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094837   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.094846   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.094853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.097915   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.098466   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.098483   28654 pod_ready.go:82] duration metric: took 399.942607ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.098496   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.294694   28654 request.go:632] Waited for 196.107304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294760   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.294778   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.294791   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.298281   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.493859   28654 request.go:632] Waited for 194.862003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493928   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493933   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.493941   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.493945   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.497771   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.498530   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.498546   28654 pod_ready.go:82] duration metric: took 400.036948ms for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.498556   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.694138   28654 request.go:632] Waited for 195.506846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694204   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.694211   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.694217   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.698240   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:57.894301   28654 request.go:632] Waited for 195.370676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894370   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894377   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.894391   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.894398   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.897846   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.898728   28654 pod_ready.go:93] pod "kube-proxy-cltcd" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.898745   28654 pod_ready.go:82] duration metric: took 400.184495ms for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.898756   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.094244   28654 request.go:632] Waited for 195.417272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094320   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094332   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.094339   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.094343   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.098070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.294156   28654 request.go:632] Waited for 195.371857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294219   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294226   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.294237   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.294245   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.297391   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.297856   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.297872   28654 pod_ready.go:82] duration metric: took 399.106499ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.297884   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.493870   28654 request.go:632] Waited for 195.913549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493927   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.493937   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.493944   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.497117   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.694489   28654 request.go:632] Waited for 196.566825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694545   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694552   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.694563   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.694568   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.697679   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.698297   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.698312   28654 pod_ready.go:82] duration metric: took 400.419475ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.698322   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.894499   28654 request.go:632] Waited for 196.088891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894592   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.894603   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.894613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.897964   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.094228   28654 request.go:632] Waited for 195.366071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094310   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094322   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.094333   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.094342   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.097557   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.098186   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.098207   28654 pod_ready.go:82] duration metric: took 399.878488ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.098219   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.294278   28654 request.go:632] Waited for 195.983419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294332   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.294345   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.294350   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.297821   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.493975   28654 request.go:632] Waited for 195.208037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494031   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494036   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.494044   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.494049   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.501563   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:13:59.502080   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.502097   28654 pod_ready.go:82] duration metric: took 403.868133ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.502106   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.694192   28654 request.go:632] Waited for 192.028751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694247   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694253   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.694260   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.694264   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.697180   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.894169   28654 request.go:632] Waited for 196.350026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894218   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894223   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.894230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.894235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.897240   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.897806   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.897823   28654 pod_ready.go:82] duration metric: took 395.71123ms for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.897835   28654 pod_ready.go:39] duration metric: took 5.200413633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
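The per-pod waits above apply the same idea to each system-critical pod: fetch the pod and require a Ready condition with status True before moving on. A minimal sketch of that predicate (podReady is a hypothetical helper; it assumes the corev1 import from the first sketch):

    // podReady reports whether a pod carries a Ready condition with status True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }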
	I1009 19:13:59.897849   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:13:59.897900   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:59.914617   28654 api_server.go:72] duration metric: took 23.08591673s to wait for apiserver process to appear ...
	I1009 19:13:59.914639   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:13:59.914655   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:13:59.918628   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:13:59.918679   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:13:59.918686   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.918696   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.918706   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.919571   28654 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 19:13:59.919687   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:13:59.919708   28654 api_server.go:131] duration metric: took 5.063855ms to wait for apiserver health ...
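The healthz and version probes above can be approximated with client-go's discovery client. The sketch below (apiServerHealthy is a hypothetical helper; it assumes the clientset and imports from the first sketch plus fmt) mirrors the two logged requests: a raw GET of /healthz expecting "ok", then a server-version lookup, which returned v1.31.1 in this run.

    // apiServerHealthy checks /healthz and then returns the control plane version.
    func apiServerHealthy(ctx context.Context, client kubernetes.Interface) (string, error) {
    	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
    	if err != nil || string(body) != "ok" {
    		return "", fmt.Errorf("healthz failed: %v (body %q)", err, body)
    	}
    	info, err := client.Discovery().ServerVersion()
    	if err != nil {
    		return "", err
    	}
    	return info.GitVersion, nil
    }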
	I1009 19:13:59.919716   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:14:00.094827   28654 request.go:632] Waited for 175.023163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094896   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094904   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.094915   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.094925   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.100594   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.107658   28654 system_pods.go:59] 24 kube-system pods found
	I1009 19:14:00.107684   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.107689   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.107692   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.107695   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.107699   28654 system_pods.go:61] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.107702   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.107706   28654 system_pods.go:61] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.107711   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.107716   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.107721   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.107725   28654 system_pods.go:61] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.107733   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.107738   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.107747   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.107754   28654 system_pods.go:61] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.107758   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.107765   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.107770   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.107777   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.107783   28654 system_pods.go:61] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.107790   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.107795   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.107802   28654 system_pods.go:61] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.107808   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.107818   28654 system_pods.go:74] duration metric: took 188.095908ms to wait for pod list to return data ...
	I1009 19:14:00.107830   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:14:00.294248   28654 request.go:632] Waited for 186.335259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294301   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294308   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.294318   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.294323   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.298434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:14:00.298601   28654 default_sa.go:45] found service account: "default"
	I1009 19:14:00.298618   28654 default_sa.go:55] duration metric: took 190.779244ms for default service account to be created ...
	I1009 19:14:00.298632   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:14:00.493990   28654 request.go:632] Waited for 195.280768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494052   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494059   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.494069   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.494077   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.499571   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.506443   28654 system_pods.go:86] 24 kube-system pods found
	I1009 19:14:00.506469   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.506474   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.506478   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.506482   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.506486   28654 system_pods.go:89] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.506490   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.506495   28654 system_pods.go:89] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.506503   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.506511   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.506518   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.506527   28654 system_pods.go:89] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.506539   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.506548   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.506555   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.506558   28654 system_pods.go:89] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.506564   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.506569   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.506574   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.506580   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.506585   28654 system_pods.go:89] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.506590   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.506598   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.506602   28654 system_pods.go:89] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.506610   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.506619   28654 system_pods.go:126] duration metric: took 207.977758ms to wait for k8s-apps to be running ...
	I1009 19:14:00.506632   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:14:00.506681   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:14:00.521903   28654 system_svc.go:56] duration metric: took 15.266021ms WaitForService to wait for kubelet
	I1009 19:14:00.521926   28654 kubeadm.go:582] duration metric: took 23.693227633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:14:00.521941   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:14:00.694326   28654 request.go:632] Waited for 172.306887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694392   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694398   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.694405   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.694409   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.698331   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:14:00.699548   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699566   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699577   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699581   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699584   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699587   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699591   28654 node_conditions.go:105] duration metric: took 177.645761ms to run NodePressure ...
	I1009 19:14:00.699601   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:14:00.699621   28654 start.go:255] writing updated cluster config ...
	I1009 19:14:00.699890   28654 ssh_runner.go:195] Run: rm -f paused
	I1009 19:14:00.750344   28654 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 19:14:00.752481   28654 out.go:177] * Done! kubectl is now configured to use "ha-199780" cluster and "default" namespace by default
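For reference, the readiness checks logged above can be reproduced by hand against the same cluster. A minimal sketch, assuming the "ha-199780" profile and the kubectl context written by this run are still present (names taken from the log; the exact invocations are illustrative, not part of the test harness):

    # same check as system_svc.go:44 above: is the kubelet unit active on the node
    minikube -p ha-199780 ssh "sudo systemctl is-active kubelet"

    # the kube-system pod list and node view polled by system_pods.go / node_conditions.go
    kubectl --context ha-199780 get pods -n kube-system
    kubectl --context ha-199780 get nodes -o wide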
	
	
	==> CRI-O <==
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.615579005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501459615556436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6dcccc5e-45e1-4104-9388-25af05fef173 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.615994254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d631e9ed-d513-4491-968f-8242cbf2f1cc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.616060556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d631e9ed-d513-4491-968f-8242cbf2f1cc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.616388991Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d631e9ed-d513-4491-968f-8242cbf2f1cc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.658269351Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3bbeff12-d12d-4d3f-9d3f-81fde0f577d8 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.658356087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3bbeff12-d12d-4d3f-9d3f-81fde0f577d8 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.659364625Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=510f2fde-88a5-4efa-9624-55217e1523b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.659903743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501459659879419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=510f2fde-88a5-4efa-9624-55217e1523b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.660496675Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5be20270-20c9-4416-b7f3-81ec21605c66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.660568881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5be20270-20c9-4416-b7f3-81ec21605c66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.660802828Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5be20270-20c9-4416-b7f3-81ec21605c66 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.698492344Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9ae18ee-0127-4151-a009-f8383e637369 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.698613843Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9ae18ee-0127-4151-a009-f8383e637369 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.699838012Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=276202a5-8f42-4da9-ab46-a136523d0db8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.700269290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501459700247707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=276202a5-8f42-4da9-ab46-a136523d0db8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.700770892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22bb513b-5122-46da-b1c0-338daaae5d2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.700840571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22bb513b-5122-46da-b1c0-338daaae5d2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.701136851Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22bb513b-5122-46da-b1c0-338daaae5d2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.738910588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bc83ced-a24b-48ce-b2cd-49d7d35e2c62 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.738998545Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bc83ced-a24b-48ce-b2cd-49d7d35e2c62 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.739995115Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e856c91-147e-4ada-92b4-20dfb4a49186 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.740484412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501459740463666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e856c91-147e-4ada-92b4-20dfb4a49186 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.741137098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6551d5e-2401-4075-90f2-2d961adac3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.741242156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6551d5e-2401-4075-90f2-2d961adac3d4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:39 ha-199780 crio[667]: time="2024-10-09 19:17:39.741590019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6551d5e-2401-4075-90f2-2d961adac3d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ea2f43f1a79f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4ee23da4cac60       busybox-7dff88458-9j59h
	22a50af75d092       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   085e585069bd9       coredns-7c65d6cfc9-r8lg7
	35a77197ba833       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   31a68dbf07563       coredns-7c65d6cfc9-v5k75
	ec6c52f12ef1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   fe10d9898f15c       storage-provisioner
	aa6f941b511ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   574f1065ffc92       kindnet-2gjpk
	e72e7a03ebf12       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   893da030028ba       kube-proxy-n8ffq
	5e66ef287f9b9       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   f43a5a99f755d       kube-vip-ha-199780
	297d9ba8730bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c04b2a2ff60e       kube-apiserver-ha-199780
	88b0c31651177       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   7304e21bfd538       kube-controller-manager-ha-199780
	ce5525ec371c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a31ef18f5a475       etcd-ha-199780
	02b6fe12544b4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4e472f9c0008c       kube-scheduler-ha-199780
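The table above is the CRI-O view of the containers on the primary control-plane node. A comparable listing can usually be pulled straight from the node over SSH; the commands below are a sketch and are not part of the captured logs:

	minikube -p ha-199780 ssh -- sudo crictl ps -a    # all containers, with state and owning pod
	minikube -p ha-199780 ssh -- sudo crictl pods     # the pod sandboxes backing them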
	
	
	==> coredns [22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431] <==
	[INFO] 10.244.2.2:60800 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001355455s
	[INFO] 10.244.2.2:51592 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001524757s
	[INFO] 10.244.0.4:56643 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000117626s
	[INFO] 10.244.0.4:59083 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001918015s
	[INFO] 10.244.1.2:50050 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020734s
	[INFO] 10.244.1.2:42588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154546s
	[INFO] 10.244.2.2:53843 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710102s
	[INFO] 10.244.2.2:41845 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146416s
	[INFO] 10.244.2.2:36609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000234089s
	[INFO] 10.244.0.4:46267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770158s
	[INFO] 10.244.0.4:50439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087554s
	[INFO] 10.244.0.4:34970 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127814s
	[INFO] 10.244.0.4:56896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001173975s
	[INFO] 10.244.0.4:49966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151676s
	[INFO] 10.244.1.2:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014083s
	[INFO] 10.244.1.2:44506 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088434s
	[INFO] 10.244.1.2:49086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070298s
	[INFO] 10.244.2.2:50808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197102s
	[INFO] 10.244.0.4:46671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019106s
	[INFO] 10.244.0.4:55369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070793s
	[INFO] 10.244.1.2:55579 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00053279s
	[INFO] 10.244.1.2:48281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017096s
	[INFO] 10.244.2.2:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179419s
	[INFO] 10.244.2.2:37087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001697s
	[INFO] 10.244.0.4:45764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105979s
	
	
	==> coredns [35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72] <==
	[INFO] 10.244.1.2:49567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017247s
	[INFO] 10.244.1.2:46716 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012636722s
	[INFO] 10.244.1.2:55598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179363s
	[INFO] 10.244.1.2:47319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137976s
	[INFO] 10.244.2.2:41489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.2.2:55951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222614s
	[INFO] 10.244.2.2:48627 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015294s
	[INFO] 10.244.2.2:39644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012309s
	[INFO] 10.244.2.2:40477 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089525s
	[INFO] 10.244.0.4:43949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.4:36372 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136676s
	[INFO] 10.244.0.4:46637 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067852s
	[INFO] 10.244.1.2:51170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178464s
	[INFO] 10.244.2.2:34724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178092s
	[INFO] 10.244.2.2:51704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113596s
	[INFO] 10.244.2.2:58856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114468s
	[INFO] 10.244.0.4:46411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103548s
	[INFO] 10.244.0.4:56515 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097616s
	[INFO] 10.244.1.2:46439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144476s
	[INFO] 10.244.1.2:55946 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169556s
	[INFO] 10.244.2.2:59005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136307s
	[INFO] 10.244.2.2:36778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074325s
	[INFO] 10.244.0.4:35520 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216466s
	[INFO] 10.244.0.4:37146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092067s
	[INFO] 10.244.0.4:38648 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006473s
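Both CoreDNS replicas are answering lookups of kubernetes.default and host.minikube.internal issued by the busybox test pods. A lookup of that kind could be reproduced with something like the following (a sketch; the pod name is taken from the container listing above, and the kubectl context is assumed to match the profile name):

	kubectl --context ha-199780 exec busybox-7dff88458-9j59h -- nslookup kubernetes.default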
	
	
	==> describe nodes <==
	Name:               ha-199780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-199780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8b350a04d4e4876ae4d16443fff45f4
	  System UUID:                f8b350a0-4d4e-4876-ae4d-16443fff45f4
	  Boot ID:                    933ad8fe-c793-4abe-b675-8fc9d8bb0df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9j59h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-r8lg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-v5k75             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-199780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-2gjpk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-199780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-199780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-n8ffq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-199780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-199780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m11s  kube-proxy       
	  Normal  Starting                 6m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m17s  kubelet          Node ha-199780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s  kubelet          Node ha-199780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s  kubelet          Node ha-199780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  NodeReady                5m55s  kubelet          Node ha-199780 status is now: NodeReady
	  Normal  RegisteredNode           5m15s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  RegisteredNode           3m58s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	
	
	Name:               ha-199780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:12:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:15:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-199780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d9c79bf2f124101a095ed4ba0ce88eb
	  System UUID:                8d9c79bf-2f12-4101-a095-ed4ba0ce88eb
	  Boot ID:                    5dd46771-2617-4b89-b6af-8b5fb9f8968b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6v84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-199780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m21s
	  kube-system                 kindnet-pwr8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-apiserver-ha-199780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-199780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-zfsq8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-ha-199780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-vip-ha-199780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m23s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m23s)  kubelet          Node ha-199780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m23s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           3m58s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-199780-m02 status is now: NodeNotReady
	
	
	Name:               ha-199780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-199780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebc1909fc264048999cb603a9af6ce3
	  System UUID:                eebc1909-fc26-4048-999c-b603a9af6ce3
	  Boot ID:                    b15e1b77-82c5-4af5-a3d4-20b2860c5033
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8946j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-199780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m5s
	  kube-system                 kindnet-b8ff2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m7s
	  kube-system                 kube-apiserver-ha-199780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-199780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-cltcd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-ha-199780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-vip-ha-199780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m7s (x8 over 4m7s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x8 over 4m7s)  kubelet          Node ha-199780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x7 over 4m7s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	
	
	Name:               ha-199780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_14_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    ha-199780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e482090944bd998625225909c9e80
	  System UUID:                781e4820-9094-4bd9-9862-5225909c9e80
	  Boot ID:                    12a0f26b-3a10-4a3c-a52b-9cbc57a77f21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24ftv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-m4z2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m2s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m2s)  kubelet          Node ha-199780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m2s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m                   node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-199780-m04 status is now: NodeReady
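In the describe output above, ha-199780-m02 carries node.kubernetes.io/unreachable taints and Unknown conditions because its kubelet stopped posting status after the secondary node was stopped, while the other three nodes remain Ready. The same conditions can be checked quickly with kubectl (a sketch, again assuming the context is named after the profile):

	kubectl --context ha-199780 get nodes -o wide
	kubectl --context ha-199780 get node ha-199780-m02 \
	  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'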
	
	
	==> dmesg <==
	[Oct 9 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040118] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479681] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 9 19:11] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.067225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062889] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.160511] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.147234] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.288221] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.950259] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.382176] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.347615] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.082493] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.436773] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.719462] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 9 19:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef] <==
	{"level":"warn","ts":"2024-10-09T19:17:39.747687Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:39.823559Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.024343Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.035983Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.045909Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.052949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.056650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.060319Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.066910Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.073782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.080950Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.084482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.088041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.093966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.100264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.106633Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.111591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.112123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.114328Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.115494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.119838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.123775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.125896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.132870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:40.176624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:17:40 up 6 min,  0 users,  load average: 0.23, 0.33, 0.18
	Linux ha-199780 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff] <==
	I1009 19:17:05.106765       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:15.106622       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:15.106737       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:15.107040       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:15.107096       1 main.go:300] handling current node
	I1009 19:17:15.107127       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:15.107145       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:15.107363       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:15.107515       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:25.107513       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:25.107568       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:25.107889       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:25.107926       1 main.go:300] handling current node
	I1009 19:17:25.107945       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:25.107952       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:25.108091       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:25.108116       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:35.098534       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:35.098583       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:35.098861       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:35.098893       1 main.go:300] handling current node
	I1009 19:17:35.098905       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:35.098910       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:35.099056       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:35.099076       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d] <==
	I1009 19:11:21.668889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:11:21.770460       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:11:21.781866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.114]
	I1009 19:11:21.782961       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 19:11:21.787948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:11:22.068030       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 19:11:22.927751       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 19:11:22.944470       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:11:23.089040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 19:11:27.267149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 19:11:27.777277       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1009 19:14:07.172312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48556: use of closed network connection
	E1009 19:14:07.353387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48566: use of closed network connection
	E1009 19:14:07.545234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48574: use of closed network connection
	E1009 19:14:07.734543       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48582: use of closed network connection
	E1009 19:14:07.929888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48590: use of closed network connection
	E1009 19:14:08.100628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48610: use of closed network connection
	E1009 19:14:08.280738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48618: use of closed network connection
	E1009 19:14:08.453709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48636: use of closed network connection
	E1009 19:14:08.625372       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48648: use of closed network connection
	E1009 19:14:08.913070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48688: use of closed network connection
	E1009 19:14:09.077842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48702: use of closed network connection
	E1009 19:14:09.252280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48730: use of closed network connection
	E1009 19:14:09.427983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1009 19:14:09.597172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48774: use of closed network connection
	
	
	==> kube-controller-manager [88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf] <==
	I1009 19:14:39.219907       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-199780-m04" podCIDRs=["10.244.3.0/24"]
	I1009 19:14:39.220731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.221061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.241490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.355995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.770947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:40.508613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-199780-m04"
	I1009 19:14:42.092487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.021323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.490581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:49.589213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:14:59.228331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:00.446970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:10.142919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:52.044073       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:15:52.044690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.073336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.197476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.479755ms"
	I1009 19:15:52.197580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.944µs"
	I1009 19:15:53.092490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:57.298894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
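The controller-manager entries above show the cluster reacting at 19:15:52, when ha-199780-m02 went NotReady: the node-ipam controller re-syncs the node and the default/busybox-7dff88458 ReplicaSet is re-synced. Recent cluster events give the same timeline (a sketch, assuming the context matches the profile name):

	kubectl --context ha-199780 get events -A --sort-by=.lastTimestamp | tail -n 20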
	
	
	==> kube-proxy [e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 19:11:28.707293       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 19:11:28.725677       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E1009 19:11:28.725782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:11:28.757070       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 19:11:28.757115       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:11:28.757143       1 server_linux.go:169] "Using iptables Proxier"
	I1009 19:11:28.759907       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:11:28.760502       1 server.go:483] "Version info" version="v1.31.1"
	I1009 19:11:28.760531       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:11:28.763071       1 config.go:199] "Starting service config controller"
	I1009 19:11:28.763270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 19:11:28.763554       1 config.go:105] "Starting endpoint slice config controller"
	I1009 19:11:28.763583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 19:11:28.764395       1 config.go:328] "Starting node config controller"
	I1009 19:11:28.764485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 19:11:28.864003       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 19:11:28.864032       1 shared_informer.go:320] Caches are synced for service config
	I1009 19:11:28.864635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f] <==
	W1009 19:11:21.020523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.020653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.034179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.034272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.151254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:11:21.151392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.213273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:11:21.213327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.215782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:11:21.217186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.224009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:11:21.224287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.233925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:11:21.234510       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 19:11:21.254121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.254998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 19:11:24.360718       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 19:14:39.271772       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274796       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d0c6f382-7a34-4281-922e-ded9d878bec1(kube-system/kube-proxy-v6wc7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v6wc7"
	E1009 19:14:39.274892       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" pod="kube-system/kube-proxy-v6wc7"
	I1009 19:14:39.274974       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274639       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	E1009 19:14:39.277781       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67dc91f7-39c8-4a82-843c-629f28c633ce(kube-system/kindnet-24ftv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24ftv"
	E1009 19:14:39.277909       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" pod="kube-system/kindnet-24ftv"
	I1009 19:14:39.278018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	
	
	==> kubelet <==
	Oct 09 19:16:23 ha-199780 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:16:23 ha-199780 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:16:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:16:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169875    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169902    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171614    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171869    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174108    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174391    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177556    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177590    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179697    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179743    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181290    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181685    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.046503    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183478    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183519    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.185325    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.186043    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.388154897s)
ha_test.go:415: expected profile "ha-199780" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-199780\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-199780\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-199780\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.114\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.83\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.84\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.124\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"
metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\"
:262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (1.394250334s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m03_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:10:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:10:42.430511   28654 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:42.430648   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430657   28654 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:42.430662   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430823   28654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:42.431377   28654 out.go:352] Setting JSON to false
	I1009 19:10:42.432258   28654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1728497859,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:42.432357   28654 start.go:139] virtualization: kvm guest
	I1009 19:10:42.434444   28654 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:42.435720   28654 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:42.435744   28654 notify.go:220] Checking for updates...
	I1009 19:10:42.438470   28654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:42.439771   28654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:42.441201   28654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.442550   28654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:42.443839   28654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:42.445321   28654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:42.478513   28654 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 19:10:42.479828   28654 start.go:297] selected driver: kvm2
	I1009 19:10:42.479841   28654 start.go:901] validating driver "kvm2" against <nil>
	I1009 19:10:42.479851   28654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:42.480537   28654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.480609   28654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:10:42.494762   28654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:10:42.494798   28654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 19:10:42.495015   28654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:42.495042   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:10:42.495103   28654 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:10:42.495115   28654 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:10:42.495160   28654 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:42.495268   28654 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.497127   28654 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:10:42.498350   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:10:42.498375   28654 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:10:42.498383   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:10:42.498461   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:10:42.498474   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:10:42.498736   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:10:42.498755   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json: {Name:mkaa9f981fdc58b4cf67de89e14727a24139b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:42.498888   28654 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:10:42.498923   28654 start.go:364] duration metric: took 18.652µs to acquireMachinesLock for "ha-199780"
	I1009 19:10:42.498944   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:10:42.499008   28654 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 19:10:42.500613   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:10:42.500730   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:42.500770   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:42.514603   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I1009 19:10:42.515116   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:42.515617   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:10:42.515660   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:42.515950   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:42.516152   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:10:42.516283   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:10:42.516418   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:10:42.516447   28654 client.go:168] LocalClient.Create starting
	I1009 19:10:42.516482   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:10:42.516515   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516531   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516577   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:10:42.516599   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516612   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516640   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:10:42.516651   28654 main.go:141] libmachine: (ha-199780) Calling .PreCreateCheck
	I1009 19:10:42.516980   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:10:42.517335   28654 main.go:141] libmachine: Creating machine...
	I1009 19:10:42.517347   28654 main.go:141] libmachine: (ha-199780) Calling .Create
	I1009 19:10:42.517467   28654 main.go:141] libmachine: (ha-199780) Creating KVM machine...
	I1009 19:10:42.518611   28654 main.go:141] libmachine: (ha-199780) DBG | found existing default KVM network
	I1009 19:10:42.519307   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.519165   28677 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1009 19:10:42.519338   28654 main.go:141] libmachine: (ha-199780) DBG | created network xml: 
	I1009 19:10:42.519353   28654 main.go:141] libmachine: (ha-199780) DBG | <network>
	I1009 19:10:42.519365   28654 main.go:141] libmachine: (ha-199780) DBG |   <name>mk-ha-199780</name>
	I1009 19:10:42.519373   28654 main.go:141] libmachine: (ha-199780) DBG |   <dns enable='no'/>
	I1009 19:10:42.519380   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519389   28654 main.go:141] libmachine: (ha-199780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 19:10:42.519398   28654 main.go:141] libmachine: (ha-199780) DBG |     <dhcp>
	I1009 19:10:42.519408   28654 main.go:141] libmachine: (ha-199780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 19:10:42.519416   28654 main.go:141] libmachine: (ha-199780) DBG |     </dhcp>
	I1009 19:10:42.519425   28654 main.go:141] libmachine: (ha-199780) DBG |   </ip>
	I1009 19:10:42.519432   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519439   28654 main.go:141] libmachine: (ha-199780) DBG | </network>
	I1009 19:10:42.519448   28654 main.go:141] libmachine: (ha-199780) DBG | 
	I1009 19:10:42.523998   28654 main.go:141] libmachine: (ha-199780) DBG | trying to create private KVM network mk-ha-199780 192.168.39.0/24...
	I1009 19:10:42.584957   28654 main.go:141] libmachine: (ha-199780) DBG | private KVM network mk-ha-199780 192.168.39.0/24 created
	I1009 19:10:42.584984   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.584941   28677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.584995   28654 main.go:141] libmachine: (ha-199780) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:42.585010   28654 main.go:141] libmachine: (ha-199780) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:10:42.585155   28654 main.go:141] libmachine: (ha-199780) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:10:42.845983   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.845854   28677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa...
	I1009 19:10:43.100187   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100062   28677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk...
	I1009 19:10:43.100216   28654 main.go:141] libmachine: (ha-199780) DBG | Writing magic tar header
	I1009 19:10:43.100229   28654 main.go:141] libmachine: (ha-199780) DBG | Writing SSH key tar header
	I1009 19:10:43.100242   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100204   28677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:43.100332   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780
	I1009 19:10:43.100355   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 (perms=drwx------)
	I1009 19:10:43.100365   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:10:43.100376   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:10:43.100386   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:43.100399   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:10:43.100406   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:10:43.100424   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:10:43.100435   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home
	I1009 19:10:43.100443   28654 main.go:141] libmachine: (ha-199780) DBG | Skipping /home - not owner
	I1009 19:10:43.100455   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:10:43.100467   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:10:43.100476   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:10:43.100483   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:10:43.100487   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:43.101601   28654 main.go:141] libmachine: (ha-199780) define libvirt domain using xml: 
	I1009 19:10:43.101609   28654 main.go:141] libmachine: (ha-199780) <domain type='kvm'>
	I1009 19:10:43.101614   28654 main.go:141] libmachine: (ha-199780)   <name>ha-199780</name>
	I1009 19:10:43.101624   28654 main.go:141] libmachine: (ha-199780)   <memory unit='MiB'>2200</memory>
	I1009 19:10:43.101632   28654 main.go:141] libmachine: (ha-199780)   <vcpu>2</vcpu>
	I1009 19:10:43.101638   28654 main.go:141] libmachine: (ha-199780)   <features>
	I1009 19:10:43.101646   28654 main.go:141] libmachine: (ha-199780)     <acpi/>
	I1009 19:10:43.101656   28654 main.go:141] libmachine: (ha-199780)     <apic/>
	I1009 19:10:43.101664   28654 main.go:141] libmachine: (ha-199780)     <pae/>
	I1009 19:10:43.101673   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.101686   28654 main.go:141] libmachine: (ha-199780)   </features>
	I1009 19:10:43.101695   28654 main.go:141] libmachine: (ha-199780)   <cpu mode='host-passthrough'>
	I1009 19:10:43.101702   28654 main.go:141] libmachine: (ha-199780)   
	I1009 19:10:43.101711   28654 main.go:141] libmachine: (ha-199780)   </cpu>
	I1009 19:10:43.101752   28654 main.go:141] libmachine: (ha-199780)   <os>
	I1009 19:10:43.101769   28654 main.go:141] libmachine: (ha-199780)     <type>hvm</type>
	I1009 19:10:43.101776   28654 main.go:141] libmachine: (ha-199780)     <boot dev='cdrom'/>
	I1009 19:10:43.101783   28654 main.go:141] libmachine: (ha-199780)     <boot dev='hd'/>
	I1009 19:10:43.101819   28654 main.go:141] libmachine: (ha-199780)     <bootmenu enable='no'/>
	I1009 19:10:43.101840   28654 main.go:141] libmachine: (ha-199780)   </os>
	I1009 19:10:43.101848   28654 main.go:141] libmachine: (ha-199780)   <devices>
	I1009 19:10:43.101855   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='cdrom'>
	I1009 19:10:43.101864   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/boot2docker.iso'/>
	I1009 19:10:43.101869   28654 main.go:141] libmachine: (ha-199780)       <target dev='hdc' bus='scsi'/>
	I1009 19:10:43.101877   28654 main.go:141] libmachine: (ha-199780)       <readonly/>
	I1009 19:10:43.101881   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101887   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='disk'>
	I1009 19:10:43.101894   28654 main.go:141] libmachine: (ha-199780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:10:43.101901   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk'/>
	I1009 19:10:43.101908   28654 main.go:141] libmachine: (ha-199780)       <target dev='hda' bus='virtio'/>
	I1009 19:10:43.101913   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101919   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101933   28654 main.go:141] libmachine: (ha-199780)       <source network='mk-ha-199780'/>
	I1009 19:10:43.101946   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101959   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.101969   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101978   28654 main.go:141] libmachine: (ha-199780)       <source network='default'/>
	I1009 19:10:43.101987   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101995   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.102004   28654 main.go:141] libmachine: (ha-199780)     <serial type='pty'>
	I1009 19:10:43.102012   28654 main.go:141] libmachine: (ha-199780)       <target port='0'/>
	I1009 19:10:43.102025   28654 main.go:141] libmachine: (ha-199780)     </serial>
	I1009 19:10:43.102042   28654 main.go:141] libmachine: (ha-199780)     <console type='pty'>
	I1009 19:10:43.102058   28654 main.go:141] libmachine: (ha-199780)       <target type='serial' port='0'/>
	I1009 19:10:43.102072   28654 main.go:141] libmachine: (ha-199780)     </console>
	I1009 19:10:43.102081   28654 main.go:141] libmachine: (ha-199780)     <rng model='virtio'>
	I1009 19:10:43.102095   28654 main.go:141] libmachine: (ha-199780)       <backend model='random'>/dev/random</backend>
	I1009 19:10:43.102102   28654 main.go:141] libmachine: (ha-199780)     </rng>
	I1009 19:10:43.102106   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102114   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102124   28654 main.go:141] libmachine: (ha-199780)   </devices>
	I1009 19:10:43.102131   28654 main.go:141] libmachine: (ha-199780) </domain>
	I1009 19:10:43.102144   28654 main.go:141] libmachine: (ha-199780) 
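The XML above is the libvirt domain definition the kvm2 driver renders for ha-199780: the boot2docker ISO attached as a read-only SCSI cdrom for first boot, the rawdisk as a virtio disk, and two virtio NICs (the private mk-ha-199780 network plus libvirt's default network). The driver defines and boots the domain through the libvirt API ("Getting domain xml..." / "Creating domain..." below); as a rough, hypothetical equivalent, the same XML could be loaded with the virsh CLI, sketched here in Go (the XML path is illustrative, not a file the test actually writes):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// defineAndStart loads a previously rendered domain XML into libvirt and
// boots it. The kvm2 driver does this through the libvirt API; this sketch
// uses the virsh CLI for brevity.
func defineAndStart(xmlPath, domain string) error {
	for _, args := range [][]string{
		{"define", xmlPath}, // register the domain from its XML definition
		{"start", domain},   // boot it (the "Creating domain..." step above)
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/ha-199780.xml", "ha-199780"); err != nil {
		log.Fatal(err)
	}
}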
	I1009 19:10:43.106174   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:62:13:83 in network default
	I1009 19:10:43.106715   28654 main.go:141] libmachine: (ha-199780) Ensuring networks are active...
	I1009 19:10:43.106743   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:43.107417   28654 main.go:141] libmachine: (ha-199780) Ensuring network default is active
	I1009 19:10:43.107748   28654 main.go:141] libmachine: (ha-199780) Ensuring network mk-ha-199780 is active
	I1009 19:10:43.108262   28654 main.go:141] libmachine: (ha-199780) Getting domain xml...
	I1009 19:10:43.109003   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:44.275323   28654 main.go:141] libmachine: (ha-199780) Waiting to get IP...
	I1009 19:10:44.276021   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.276397   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.276440   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.276393   28677 retry.go:31] will retry after 234.976528ms: waiting for machine to come up
	I1009 19:10:44.512805   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.513239   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.513266   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.513207   28677 retry.go:31] will retry after 293.441421ms: waiting for machine to come up
	I1009 19:10:44.808637   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.809099   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.809119   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.809062   28677 retry.go:31] will retry after 303.641198ms: waiting for machine to come up
	I1009 19:10:45.114382   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.114813   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.114842   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.114772   28677 retry.go:31] will retry after 536.014176ms: waiting for machine to come up
	I1009 19:10:45.652428   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.652792   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.652818   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.652745   28677 retry.go:31] will retry after 705.110787ms: waiting for machine to come up
	I1009 19:10:46.359497   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:46.360044   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:46.360101   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:46.360017   28677 retry.go:31] will retry after 647.020654ms: waiting for machine to come up
	I1009 19:10:47.008863   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:47.009323   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:47.009364   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:47.009282   28677 retry.go:31] will retry after 1.0294982s: waiting for machine to come up
	I1009 19:10:48.039832   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:48.040304   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:48.040326   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:48.040267   28677 retry.go:31] will retry after 1.106767931s: waiting for machine to come up
	I1009 19:10:49.148646   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:49.149054   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:49.149076   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:49.149026   28677 retry.go:31] will retry after 1.376949133s: waiting for machine to come up
	I1009 19:10:50.527437   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:50.527855   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:50.527877   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:50.527806   28677 retry.go:31] will retry after 1.480550438s: waiting for machine to come up
	I1009 19:10:52.009673   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:52.010195   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:52.010224   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:52.010161   28677 retry.go:31] will retry after 2.407652517s: waiting for machine to come up
	I1009 19:10:54.420236   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:54.420627   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:54.420661   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:54.420596   28677 retry.go:31] will retry after 3.410708317s: waiting for machine to come up
	I1009 19:10:57.833396   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:57.833828   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:57.833855   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:57.833781   28677 retry.go:31] will retry after 3.08007179s: waiting for machine to come up
	I1009 19:11:00.918052   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:00.918375   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:11:00.918394   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:11:00.918349   28677 retry.go:31] will retry after 3.66383863s: waiting for machine to come up
	I1009 19:11:04.584755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585113   28654 main.go:141] libmachine: (ha-199780) Found IP for machine: 192.168.39.114
	I1009 19:11:04.585143   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585150   28654 main.go:141] libmachine: (ha-199780) Reserving static IP address...
	I1009 19:11:04.585468   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find host DHCP lease matching {name: "ha-199780", mac: "52:54:00:5a:16:82", ip: "192.168.39.114"} in network mk-ha-199780
	I1009 19:11:04.653177   28654 main.go:141] libmachine: (ha-199780) DBG | Getting to WaitForSSH function...
	I1009 19:11:04.653210   28654 main.go:141] libmachine: (ha-199780) Reserved static IP address: 192.168.39.114
	I1009 19:11:04.653224   28654 main.go:141] libmachine: (ha-199780) Waiting for SSH to be available...
	I1009 19:11:04.655641   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.655950   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.655974   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.656128   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH client type: external
	I1009 19:11:04.656155   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa (-rw-------)
	I1009 19:11:04.656182   28654 main.go:141] libmachine: (ha-199780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:04.656192   28654 main.go:141] libmachine: (ha-199780) DBG | About to run SSH command:
	I1009 19:11:04.656207   28654 main.go:141] libmachine: (ha-199780) DBG | exit 0
	I1009 19:11:04.778875   28654 main.go:141] libmachine: (ha-199780) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:04.779170   28654 main.go:141] libmachine: (ha-199780) KVM machine creation complete!
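The "Waiting to get IP" phase above is a plain poll loop: the driver queries libvirt for a DHCP lease matching MAC 52:54:00:5a:16:82 on mk-ha-199780 and, while none exists, sleeps for an increasing randomized delay (roughly 235ms growing to a few seconds in this run) before retrying. A minimal sketch of that pattern, with a stub standing in for the real libvirt lease lookup (lookupLeaseIP and its behaviour are hypothetical):

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupLeaseIP stands in for the libvirt DHCP-lease query the driver
// performs; here it simply fails a few times before "finding" an address.
var attempts int

func lookupLeaseIP(mac string) (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.114", nil
}

func main() {
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupLeaseIP("52:54:00:5a:16:82")
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay *= 2
		if delay > 4*time.Second {
			delay = 4 * time.Second // cap the backoff, as the delays in the log do
		}
	}
}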
	I1009 19:11:04.779478   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:04.780010   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780176   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780315   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:04.780331   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:04.781523   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:04.781541   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:04.781546   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:04.781551   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.783979   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784330   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.784354   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784520   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.784676   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784815   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784920   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.785023   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.785198   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.785208   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:04.886621   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:04.886642   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:04.886652   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.889117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889470   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.889489   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889658   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.889825   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.889979   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.890105   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.890280   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.890429   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.890439   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:04.991626   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:04.991752   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:04.991763   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:04.991772   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.991975   28654 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:11:04.991994   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.992147   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.994446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994806   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.994831   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994954   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.995140   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995287   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995424   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.995557   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.995745   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.995756   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:11:05.113349   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:11:05.113396   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.116625   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117021   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.117049   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117198   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.117349   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117468   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117570   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.117692   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.117857   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.117885   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:05.228123   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:05.228148   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:05.228172   28654 buildroot.go:174] setting up certificates
	I1009 19:11:05.228182   28654 provision.go:84] configureAuth start
	I1009 19:11:05.228189   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:05.228442   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.230797   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231092   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.231117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231241   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.233255   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233547   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.233569   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233652   28654 provision.go:143] copyHostCerts
	I1009 19:11:05.233688   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233736   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:05.233748   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233826   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:05.233942   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.233970   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:05.233976   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.234005   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:05.234063   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234084   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:05.234090   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234111   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:05.234159   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	I1009 19:11:05.299525   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:05.299577   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:05.299597   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.301859   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302122   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.302159   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302298   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.302456   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.302593   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.302710   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.385328   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:05.385392   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:05.408377   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:05.408446   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:11:05.431231   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:05.431308   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:05.454941   28654 provision.go:87] duration metric: took 226.750506ms to configureAuth
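configureAuth above copies the host-side ca.pem/cert.pem/key.pem into place, generates a server certificate whose SAN list is [127.0.0.1 192.168.39.114 ha-199780 localhost minikube], and then scps ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A minimal sketch of how such a SAN list maps onto a Go x509 template (self-signed here for brevity; the real server.pem is signed by the minikube CA key pair):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SAN list taken from the provisioning log above:
	// san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-199780", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.114")},
	}

	// Self-signed for brevity; the real flow signs with ca.pem/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}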
	I1009 19:11:05.454965   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:05.455145   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:05.455206   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.457741   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458006   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.458042   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458216   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.458397   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458525   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458644   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.458788   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.458960   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.458976   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:05.676474   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:05.676512   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:05.676522   28654 main.go:141] libmachine: (ha-199780) Calling .GetURL
	I1009 19:11:05.677728   28654 main.go:141] libmachine: (ha-199780) DBG | Using libvirt version 6000000
	I1009 19:11:05.679755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680041   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.680069   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680196   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:05.680210   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:05.680217   28654 client.go:171] duration metric: took 23.163762708s to LocalClient.Create
	I1009 19:11:05.680235   28654 start.go:167] duration metric: took 23.163818343s to libmachine.API.Create "ha-199780"
	I1009 19:11:05.680244   28654 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:11:05.680255   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:05.680269   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.680459   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:05.680481   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.682388   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682658   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.682683   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682747   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.682909   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.683039   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.683197   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.767177   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:05.771701   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:05.771721   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:05.771790   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:05.771869   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:05.771881   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:05.771984   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:05.783287   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:05.808917   28654 start.go:296] duration metric: took 128.662808ms for postStartSetup
	I1009 19:11:05.808956   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:05.809504   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.812016   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812350   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.812373   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812566   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:05.812738   28654 start.go:128] duration metric: took 23.313722048s to createHost
	I1009 19:11:05.812762   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.814746   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.815078   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815176   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.815323   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815479   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815598   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.815737   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.815932   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.815953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:05.919951   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501065.894358321
	
	I1009 19:11:05.919974   28654 fix.go:216] guest clock: 1728501065.894358321
	I1009 19:11:05.919982   28654 fix.go:229] Guest: 2024-10-09 19:11:05.894358321 +0000 UTC Remote: 2024-10-09 19:11:05.812750418 +0000 UTC m=+23.417944098 (delta=81.607903ms)
	I1009 19:11:05.920005   28654 fix.go:200] guest clock delta is within tolerance: 81.607903ms
	I1009 19:11:05.920012   28654 start.go:83] releasing machines lock for "ha-199780", held for 23.421078352s
	I1009 19:11:05.920035   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.920263   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.922615   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.922966   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.922995   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.923150   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923568   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923734   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923824   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:05.923862   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.924006   28654 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:05.924044   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.926446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926648   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926765   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.926802   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926912   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.927038   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927086   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.927223   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927272   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927339   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.927433   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927750   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927897   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:06.024499   28654 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:06.030414   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:06.185061   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:06.191423   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:06.191490   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:06.206786   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:06.206805   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:06.206857   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:06.222401   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:06.235373   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:06.235433   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:06.247949   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:06.260686   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:06.376406   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:06.514646   28654 docker.go:233] disabling docker service ...
	I1009 19:11:06.514703   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:06.529298   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:06.542407   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:06.674904   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:06.805457   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:06.819076   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:06.839480   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:06.839538   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.851838   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:06.851893   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.864160   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.876368   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.889066   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:06.901093   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.912169   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.929058   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.939929   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:06.949542   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:06.949583   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:06.962939   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:06.972697   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:07.093662   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
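The sed sequence above pins the pause image, switches CRI-O to the cgroupfs cgroup manager, puts conmon into the "pod" cgroup, and allows unprivileged binds to low ports by adding net.ipv4.ip_unprivileged_port_start=0 to default_sysctls, before reloading systemd and restarting the service. As a rough reconstruction from those commands (not a dump of the actual file, and the section headers are assumed), the drop-in at /etc/crio/crio.conf.d/02-crio.conf should afterwards contain settings along these lines:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]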
	I1009 19:11:07.192295   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:07.192352   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:07.197105   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:07.197162   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:07.200935   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:07.247609   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:07.247689   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.275380   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.304930   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:07.306083   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:07.308768   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309094   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:07.309121   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309303   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:07.313459   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:07.326691   28654 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:07.326798   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:07.326859   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:07.358942   28654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 19:11:07.359000   28654 ssh_runner.go:195] Run: which lz4
	I1009 19:11:07.363007   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1009 19:11:07.363119   28654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:11:07.367226   28654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:11:07.367262   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 19:11:08.682998   28654 crio.go:462] duration metric: took 1.319910565s to copy over tarball
	I1009 19:11:08.683082   28654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 19:11:10.661640   28654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978525541s)
	I1009 19:11:10.661674   28654 crio.go:469] duration metric: took 1.978647131s to extract the tarball
	I1009 19:11:10.661683   28654 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 19:11:10.698452   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:10.744870   28654 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:10.744890   28654 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:11:10.744897   28654 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:11:10.744976   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:10.745041   28654 ssh_runner.go:195] Run: crio config
	I1009 19:11:10.794773   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:10.794792   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:10.794807   28654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:10.794828   28654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:10.794978   28654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
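The kubeadm config above is rendered as a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and, a few steps further down, is scp'd to /var/tmp/minikube/kubeadm.yaml.new on the guest. A small hedged Go sketch, using gopkg.in/yaml.v3, that walks such a file and lists each document's kind (the path is the one the log copies the file to; running it elsewhere would need a different path):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// yaml.v3 decodes one document per Decode call and returns io.EOF
	// once the multi-document stream is exhausted.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}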
	
	I1009 19:11:10.795005   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:10.795055   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:10.811512   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:10.811631   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
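Note: the manifest above runs kube-vip as a static pod that binds the HA virtual IP 192.168.39.254 on eth0 and elects a leader through the plndr-cp-lock Lease; with cp_enable and lb_enable it also load-balances port 8443 across control-plane nodes (the "auto-enabling control-plane load-balancing" line above). A minimal sketch for verifying this once the control plane is up, assuming shell access to the node and using the kubectl binary minikube stages, as seen later in this log:
	# the VIP should show up as an extra address on eth0 of the current leader
	ip addr show eth0 | grep 192.168.39.254
	# the current leader is recorded in the plndr-cp-lock Lease in kube-system
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
	  -n kube-system get lease plndr-cp-lock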
	I1009 19:11:10.811693   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:10.821887   28654 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:10.821946   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:11:10.831583   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:11:10.848385   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:10.865617   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:11:10.882082   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1009 19:11:10.898198   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:10.902054   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
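Note: the one-liner above is the usual in-place /etc/hosts rewrite: grep -v strips any stale line ending in a tab plus control-plane.minikube.internal, echo appends the mapping to the HA VIP, and the temp file is copied back with sudo so only the final copy needs root. A quick check of the expected result:
	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.39.254	control-plane.minikube.internal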
	I1009 19:11:10.914494   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:11.043972   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:11.060509   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:11:11.060533   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:11.060553   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.060728   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:11.060785   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:11.060798   28654 certs.go:256] generating profile certs ...
	I1009 19:11:11.060867   28654 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:11.060891   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt with IP's: []
	I1009 19:11:11.257901   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt ...
	I1009 19:11:11.257931   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt: {Name:mke6971132fee40da37bc72041e92dde05b5c360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258111   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key ...
	I1009 19:11:11.258127   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key: {Name:mk2c48ceaf748f5efc5f062df1cf8bf8d38b626a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258227   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621
	I1009 19:11:11.258246   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I1009 19:11:11.502202   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 ...
	I1009 19:11:11.502241   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621: {Name:mk85bc5cf43d418e43d8be4b6611eb785caa9f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502445   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 ...
	I1009 19:11:11.502463   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621: {Name:mk1d94ea93b96fe750cd9f95170ab488ca016856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502573   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:11.502721   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:11:11.502815   28654 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:11.502839   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt with IP's: []
	I1009 19:11:11.612443   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt ...
	I1009 19:11:11.612470   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt: {Name:mk212b018e6441944e189239707af3950678c689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612646   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key ...
	I1009 19:11:11.612656   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key: {Name:mkb7f3d492b787f9b9b56d2b48939b9971f793ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
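Note: the apiserver profile certificate generated above is signed for the service IP, localhost, the node IP and the HA VIP. A small sketch for inspecting those SANs with openssl, using the paths shown in this log:
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# the IP list should include 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.114 and 192.168.39.254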
	I1009 19:11:11.612724   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:11.612740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:11.612751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:11.612763   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:11.612774   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:11.612786   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:11.612798   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:11.612810   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:11.612864   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:11.612897   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:11.612903   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:11.612926   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:11.612951   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:11.612971   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:11.613006   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:11.613033   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.613046   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.613058   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:11.613596   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:11.638855   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:11.662787   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:11.686693   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:11.710429   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:11.734032   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:11.757651   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:11.781611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:11.805128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:11.831515   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:11.878516   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:11.903576   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:11:11.920589   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:11.926400   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:11.937651   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942167   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942223   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.947902   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:11.959013   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:11.970169   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974738   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974799   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.980430   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:11.991569   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:12.002421   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006666   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006711   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.012305   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
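Note: the <hash>.0 symlinks created above follow OpenSSL's subject-hash convention, which is how TLS clients on the node look up a CA under /etc/ssl/certs. For example, for minikubeCA:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0   # symlink pointing at minikubeCA.pem, per the ln -fs above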
	I1009 19:11:12.023435   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:12.027428   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:12.027474   28654 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:12.027535   28654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:12.027572   28654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:12.068414   28654 cri.go:89] found id: ""
	I1009 19:11:12.068473   28654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:12.078653   28654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
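Note: with the rendered configuration now in place at /var/tmp/minikube/kubeadm.yaml, one hedged way to sanity-check it by hand before init is kubeadm's own validator, assuming the kubeadm binary minikube staged under /var/lib/minikube/binaries supports the "config validate" subcommand (it will also warn that kubeadm.k8s.io/v1beta3 is deprecated, as the init output below does):
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml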
	I1009 19:11:12.088659   28654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:12.098391   28654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:11:12.098408   28654 kubeadm.go:157] found existing configuration files:
	
	I1009 19:11:12.098445   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:11:12.107757   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:11:12.107807   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:11:12.117369   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:11:12.126789   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:11:12.126847   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:12.136637   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.146308   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:11:12.146364   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.156469   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:11:12.165834   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:11:12.165886   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:12.175515   28654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 19:11:12.280177   28654 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 19:11:12.280255   28654 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 19:11:12.386423   28654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:11:12.386621   28654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:11:12.386752   28654 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:11:12.404964   28654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:11:12.482162   28654 out.go:235]   - Generating certificates and keys ...
	I1009 19:11:12.482262   28654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 19:11:12.482346   28654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 19:11:12.648552   28654 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:11:12.833455   28654 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:11:13.055850   28654 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:11:13.322371   28654 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 19:11:13.484433   28654 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 19:11:13.484631   28654 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:13.583799   28654 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 19:11:13.584031   28654 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:14.090538   28654 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:11:14.260812   28654 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:11:14.391262   28654 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 19:11:14.391369   28654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:11:14.744340   28654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:11:14.834478   28654 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:11:14.925339   28654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:11:15.080024   28654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:11:15.271189   28654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:11:15.271810   28654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:11:15.277194   28654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:11:15.369554   28654 out.go:235]   - Booting up control plane ...
	I1009 19:11:15.369723   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:11:15.369842   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:11:15.369937   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:11:15.370057   28654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:11:15.370148   28654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:11:15.370183   28654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 19:11:15.445224   28654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:11:15.445341   28654 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:11:16.448580   28654 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005128821s
	I1009 19:11:16.448662   28654 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 19:11:22.061566   28654 kubeadm.go:310] [api-check] The API server is healthy after 5.61687232s
	I1009 19:11:22.078904   28654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:11:22.108560   28654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:11:22.646139   28654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:11:22.646344   28654 kubeadm.go:310] [mark-control-plane] Marking the node ha-199780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:11:22.657702   28654 kubeadm.go:310] [bootstrap-token] Using token: n3skeb.bws3ifw22cumajmm
	I1009 19:11:22.659119   28654 out.go:235]   - Configuring RBAC rules ...
	I1009 19:11:22.659267   28654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:11:22.664574   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:11:22.677942   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:11:22.681624   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:11:22.685155   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:11:22.689541   28654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:11:22.705080   28654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:11:22.957052   28654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 19:11:23.469842   28654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 19:11:23.470871   28654 kubeadm.go:310] 
	I1009 19:11:23.470925   28654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 19:11:23.470933   28654 kubeadm.go:310] 
	I1009 19:11:23.471051   28654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 19:11:23.471083   28654 kubeadm.go:310] 
	I1009 19:11:23.471125   28654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 19:11:23.471223   28654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:11:23.471271   28654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:11:23.471296   28654 kubeadm.go:310] 
	I1009 19:11:23.471380   28654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 19:11:23.471393   28654 kubeadm.go:310] 
	I1009 19:11:23.471455   28654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:11:23.471464   28654 kubeadm.go:310] 
	I1009 19:11:23.471537   28654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 19:11:23.471641   28654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:11:23.471738   28654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:11:23.471753   28654 kubeadm.go:310] 
	I1009 19:11:23.471870   28654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:11:23.471974   28654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 19:11:23.471984   28654 kubeadm.go:310] 
	I1009 19:11:23.472086   28654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472234   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 19:11:23.472263   28654 kubeadm.go:310] 	--control-plane 
	I1009 19:11:23.472276   28654 kubeadm.go:310] 
	I1009 19:11:23.472382   28654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:11:23.472392   28654 kubeadm.go:310] 
	I1009 19:11:23.472488   28654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472616   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 19:11:23.473525   28654 kubeadm.go:310] W1009 19:11:12.257145     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473837   28654 kubeadm.go:310] W1009 19:11:12.259703     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473994   28654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
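Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A sketch of how it can be recomputed on the control-plane node using the standard kubeadm recipe (certificatesDir is /var/lib/minikube/certs per the config above; this assumes an RSA CA key):
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected to match the hash printed in the join commands above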
	I1009 19:11:23.474033   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:23.474046   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:23.475963   28654 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 19:11:23.477363   28654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:11:23.483529   28654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 19:11:23.483553   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:11:23.504303   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
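Note: because this is a multi-node profile, minikube recommends kindnet as the CNI and applies its manifest above. A minimal check that the CNI rolled out, assuming the manifest names the DaemonSet "kindnet" as minikube's bundled kindnet manifest does:
	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet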
	I1009 19:11:23.863157   28654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:11:23.863274   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:23.863284   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780 minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=true
	I1009 19:11:23.884152   28654 ops.go:34] apiserver oom_adj: -16
	I1009 19:11:24.005714   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:24.506374   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.006091   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.506438   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.006141   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.506040   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.006400   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.505831   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.598386   28654 kubeadm.go:1113] duration metric: took 3.735177044s to wait for elevateKubeSystemPrivileges
	I1009 19:11:27.598425   28654 kubeadm.go:394] duration metric: took 15.5709527s to StartCluster
	I1009 19:11:27.598446   28654 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.598527   28654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.599166   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.599347   28654 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:27.599374   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:11:27.599357   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:11:27.599375   28654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:11:27.599458   28654 addons.go:69] Setting storage-provisioner=true in profile "ha-199780"
	I1009 19:11:27.599469   28654 addons.go:69] Setting default-storageclass=true in profile "ha-199780"
	I1009 19:11:27.599477   28654 addons.go:234] Setting addon storage-provisioner=true in "ha-199780"
	I1009 19:11:27.599485   28654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-199780"
	I1009 19:11:27.599503   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.599506   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:27.599886   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599927   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599929   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.599968   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.614342   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I1009 19:11:27.614587   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I1009 19:11:27.614820   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615004   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615360   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615381   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615494   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615521   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615770   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615869   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615936   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.616437   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.616482   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.618027   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.618409   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:11:27.618933   28654 cert_rotation.go:140] Starting client certificate rotation controller
	I1009 19:11:27.619199   28654 addons.go:234] Setting addon default-storageclass=true in "ha-199780"
	I1009 19:11:27.619240   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.619589   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.619644   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.631880   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I1009 19:11:27.632439   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.632953   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.632968   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.633306   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.633511   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.633650   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I1009 19:11:27.634127   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.634757   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.634777   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.635148   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.635306   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.635705   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.635747   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.637278   28654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:11:27.638972   28654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.638992   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:11:27.639008   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.642192   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642642   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.642674   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642796   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.642968   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.643174   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.643344   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.651531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I1009 19:11:27.652010   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.652633   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.652663   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.652996   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.653186   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.654702   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.654903   28654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:27.654916   28654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:11:27.654931   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.657462   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657809   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.657834   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657997   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.658162   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.658275   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.658409   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.708249   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
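Note: the pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts plugin block that maps host.minikube.internal to the host gateway 192.168.39.1 (with fallthrough) before the forward stanza, adds the log plugin before errors, and replaces the ConfigMap. A quick way to confirm the injected block, assuming kubectl access to the cluster:
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected:
	#        hosts {
	#           192.168.39.1 host.minikube.internal
	#           fallthrough
	#        }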
	I1009 19:11:27.824778   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.831460   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:28.120955   28654 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 19:11:28.573087   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573114   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573134   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573150   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573505   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573520   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573544   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573545   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573557   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573510   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573628   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573649   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573658   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573565   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573900   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573917   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573930   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573931   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573940   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573984   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.574002   28654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:11:28.574017   28654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:11:28.574123   28654 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1009 19:11:28.574129   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.574140   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.574147   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.586337   28654 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1009 19:11:28.587207   28654 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1009 19:11:28.587225   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.587233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.587241   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.587251   28654 round_trippers.go:473]     Content-Type: application/json
	I1009 19:11:28.594277   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:11:28.594441   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.594457   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.594703   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.594721   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.596581   28654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:11:28.597699   28654 addons.go:510] duration metric: took 998.327173ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:11:28.597726   28654 start.go:246] waiting for cluster config update ...
	I1009 19:11:28.597735   28654 start.go:255] writing updated cluster config ...
	I1009 19:11:28.599169   28654 out.go:201] 
	I1009 19:11:28.600456   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:28.600538   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.601965   28654 out.go:177] * Starting "ha-199780-m02" control-plane node in "ha-199780" cluster
	I1009 19:11:28.602974   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:28.602993   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:11:28.603093   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:28.603107   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:11:28.603182   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.603350   28654 start.go:360] acquireMachinesLock for ha-199780-m02: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:11:28.603394   28654 start.go:364] duration metric: took 25.364µs to acquireMachinesLock for "ha-199780-m02"
	I1009 19:11:28.603415   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:28.603505   28654 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1009 19:11:28.604883   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:11:28.604963   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:28.604996   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:28.620174   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1009 19:11:28.620709   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:28.621235   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:28.621259   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:28.621551   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:28.621737   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:28.621880   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:28.622077   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:11:28.622107   28654 client.go:168] LocalClient.Create starting
	I1009 19:11:28.622146   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:11:28.622193   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622213   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622278   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:11:28.622306   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622322   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622345   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:11:28.622356   28654 main.go:141] libmachine: (ha-199780-m02) Calling .PreCreateCheck
	I1009 19:11:28.622534   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:28.622992   28654 main.go:141] libmachine: Creating machine...
	I1009 19:11:28.623009   28654 main.go:141] libmachine: (ha-199780-m02) Calling .Create
	I1009 19:11:28.623202   28654 main.go:141] libmachine: (ha-199780-m02) Creating KVM machine...
	I1009 19:11:28.624414   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing default KVM network
	I1009 19:11:28.624553   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing private KVM network mk-ha-199780
	I1009 19:11:28.624697   28654 main.go:141] libmachine: (ha-199780-m02) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:28.624717   28654 main.go:141] libmachine: (ha-199780-m02) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:11:28.627180   28654 main.go:141] libmachine: (ha-199780-m02) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:11:28.627222   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.624673   29017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:28.859004   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.858864   29017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa...
	I1009 19:11:29.192250   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192144   29017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk...
	I1009 19:11:29.192281   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing magic tar header
	I1009 19:11:29.192291   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing SSH key tar header
	I1009 19:11:29.192299   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192250   29017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:29.192353   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02
	I1009 19:11:29.192372   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:11:29.192385   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 (perms=drwx------)
	I1009 19:11:29.192398   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:29.192410   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:11:29.192419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:11:29.192426   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:11:29.192433   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home
	I1009 19:11:29.192451   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Skipping /home - not owner
	I1009 19:11:29.192471   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:11:29.192484   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:11:29.192493   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:11:29.192501   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:11:29.192508   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:11:29.192515   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:29.193313   28654 main.go:141] libmachine: (ha-199780-m02) define libvirt domain using xml: 
	I1009 19:11:29.193342   28654 main.go:141] libmachine: (ha-199780-m02) <domain type='kvm'>
	I1009 19:11:29.193353   28654 main.go:141] libmachine: (ha-199780-m02)   <name>ha-199780-m02</name>
	I1009 19:11:29.193360   28654 main.go:141] libmachine: (ha-199780-m02)   <memory unit='MiB'>2200</memory>
	I1009 19:11:29.193368   28654 main.go:141] libmachine: (ha-199780-m02)   <vcpu>2</vcpu>
	I1009 19:11:29.193381   28654 main.go:141] libmachine: (ha-199780-m02)   <features>
	I1009 19:11:29.193404   28654 main.go:141] libmachine: (ha-199780-m02)     <acpi/>
	I1009 19:11:29.193418   28654 main.go:141] libmachine: (ha-199780-m02)     <apic/>
	I1009 19:11:29.193448   28654 main.go:141] libmachine: (ha-199780-m02)     <pae/>
	I1009 19:11:29.193470   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193481   28654 main.go:141] libmachine: (ha-199780-m02)   </features>
	I1009 19:11:29.193502   28654 main.go:141] libmachine: (ha-199780-m02)   <cpu mode='host-passthrough'>
	I1009 19:11:29.193521   28654 main.go:141] libmachine: (ha-199780-m02)   
	I1009 19:11:29.193531   28654 main.go:141] libmachine: (ha-199780-m02)   </cpu>
	I1009 19:11:29.193548   28654 main.go:141] libmachine: (ha-199780-m02)   <os>
	I1009 19:11:29.193569   28654 main.go:141] libmachine: (ha-199780-m02)     <type>hvm</type>
	I1009 19:11:29.193584   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='cdrom'/>
	I1009 19:11:29.193597   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='hd'/>
	I1009 19:11:29.193605   28654 main.go:141] libmachine: (ha-199780-m02)     <bootmenu enable='no'/>
	I1009 19:11:29.193614   28654 main.go:141] libmachine: (ha-199780-m02)   </os>
	I1009 19:11:29.193622   28654 main.go:141] libmachine: (ha-199780-m02)   <devices>
	I1009 19:11:29.193631   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='cdrom'>
	I1009 19:11:29.193644   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/boot2docker.iso'/>
	I1009 19:11:29.193658   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hdc' bus='scsi'/>
	I1009 19:11:29.193669   28654 main.go:141] libmachine: (ha-199780-m02)       <readonly/>
	I1009 19:11:29.193678   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193692   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='disk'>
	I1009 19:11:29.193703   28654 main.go:141] libmachine: (ha-199780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:11:29.193717   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk'/>
	I1009 19:11:29.193731   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hda' bus='virtio'/>
	I1009 19:11:29.193743   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193752   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193764   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='mk-ha-199780'/>
	I1009 19:11:29.193774   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193784   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193794   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193805   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='default'/>
	I1009 19:11:29.193820   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193833   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193841   28654 main.go:141] libmachine: (ha-199780-m02)     <serial type='pty'>
	I1009 19:11:29.193855   28654 main.go:141] libmachine: (ha-199780-m02)       <target port='0'/>
	I1009 19:11:29.193865   28654 main.go:141] libmachine: (ha-199780-m02)     </serial>
	I1009 19:11:29.193871   28654 main.go:141] libmachine: (ha-199780-m02)     <console type='pty'>
	I1009 19:11:29.193881   28654 main.go:141] libmachine: (ha-199780-m02)       <target type='serial' port='0'/>
	I1009 19:11:29.193890   28654 main.go:141] libmachine: (ha-199780-m02)     </console>
	I1009 19:11:29.193901   28654 main.go:141] libmachine: (ha-199780-m02)     <rng model='virtio'>
	I1009 19:11:29.193911   28654 main.go:141] libmachine: (ha-199780-m02)       <backend model='random'>/dev/random</backend>
	I1009 19:11:29.193933   28654 main.go:141] libmachine: (ha-199780-m02)     </rng>
	I1009 19:11:29.193946   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193962   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193978   28654 main.go:141] libmachine: (ha-199780-m02)   </devices>
	I1009 19:11:29.193990   28654 main.go:141] libmachine: (ha-199780-m02) </domain>
	I1009 19:11:29.193999   28654 main.go:141] libmachine: (ha-199780-m02) 
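
The domain XML logged above boots the machine from the boot2docker ISO (read-only SCSI cdrom) and falls back to the raw virtio disk, and attaches two virtio NICs: one on the private mk-ha-199780 network and one on libvirt's default network for outbound access. The driver then defines and starts the domain ("Creating domain..."). Below is a minimal sketch of that define-then-start flow, assuming the libvirt.org/go/libvirt bindings; minikube's kvm2 driver wraps this in its own code, so treat the sketch as the shape of the interaction only.

package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Same URI the test uses (KVMQemuURI:qemu:///system in the config dump).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// domainXML would be the <domain type='kvm'>...</domain> document logged
	// above; read from a file here purely for brevity.
	domainXML, err := os.ReadFile("ha-199780-m02.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(domainXML))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain started; now waiting for a DHCP lease")
}
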
	I1009 19:11:29.200233   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:9f:20:14 in network default
	I1009 19:11:29.200751   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring networks are active...
	I1009 19:11:29.200778   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:29.201355   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network default is active
	I1009 19:11:29.201602   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network mk-ha-199780 is active
	I1009 19:11:29.201876   28654 main.go:141] libmachine: (ha-199780-m02) Getting domain xml...
	I1009 19:11:29.202487   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:30.395985   28654 main.go:141] libmachine: (ha-199780-m02) Waiting to get IP...
	I1009 19:11:30.396850   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.397221   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.397245   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.397192   29017 retry.go:31] will retry after 306.623748ms: waiting for machine to come up
	I1009 19:11:30.705681   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.706111   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.706142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.706073   29017 retry.go:31] will retry after 272.886306ms: waiting for machine to come up
	I1009 19:11:30.980636   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.981119   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.981146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.981081   29017 retry.go:31] will retry after 373.250902ms: waiting for machine to come up
	I1009 19:11:31.355561   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.355953   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.355981   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.355905   29017 retry.go:31] will retry after 402.386513ms: waiting for machine to come up
	I1009 19:11:31.759650   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.760178   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.760204   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.760143   29017 retry.go:31] will retry after 700.718844ms: waiting for machine to come up
	I1009 19:11:32.462533   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:32.462970   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:32.462999   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:32.462916   29017 retry.go:31] will retry after 892.701908ms: waiting for machine to come up
	I1009 19:11:33.357278   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:33.357677   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:33.357700   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:33.357645   29017 retry.go:31] will retry after 892.900741ms: waiting for machine to come up
	I1009 19:11:34.252184   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:34.252581   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:34.252605   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:34.252542   29017 retry.go:31] will retry after 919.729577ms: waiting for machine to come up
	I1009 19:11:35.174060   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:35.174445   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:35.174475   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:35.174422   29017 retry.go:31] will retry after 1.688669614s: waiting for machine to come up
	I1009 19:11:36.865075   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:36.865384   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:36.865412   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:36.865340   29017 retry.go:31] will retry after 1.768384485s: waiting for machine to come up
	I1009 19:11:38.635106   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:38.635545   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:38.635574   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:38.635487   29017 retry.go:31] will retry after 2.193559284s: waiting for machine to come up
	I1009 19:11:40.831238   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:40.831740   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:40.831780   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:40.831709   29017 retry.go:31] will retry after 3.434402997s: waiting for machine to come up
	I1009 19:11:44.267146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:44.267644   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:44.267671   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:44.267602   29017 retry.go:31] will retry after 4.164642466s: waiting for machine to come up
	I1009 19:11:48.436657   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:48.436991   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:48.437015   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:48.436952   29017 retry.go:31] will retry after 3.860630111s: waiting for machine to come up
	I1009 19:11:52.302118   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302487   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has current primary IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302554   28654 main.go:141] libmachine: (ha-199780-m02) Found IP for machine: 192.168.39.83
	I1009 19:11:52.302579   28654 main.go:141] libmachine: (ha-199780-m02) Reserving static IP address...
	I1009 19:11:52.302886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find host DHCP lease matching {name: "ha-199780-m02", mac: "52:54:00:49:9d:cf", ip: "192.168.39.83"} in network mk-ha-199780
	I1009 19:11:52.372076   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Getting to WaitForSSH function...
	I1009 19:11:52.372102   28654 main.go:141] libmachine: (ha-199780-m02) Reserved static IP address: 192.168.39.83
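
The "will retry after ...: waiting for machine to come up" lines above are a bounded poll: the driver repeatedly looks for a DHCP lease matching the domain's MAC address (52:54:00:49:9d:cf) in network mk-ha-199780, sleeping an increasing, jittered delay between attempts, until an address appears (about 24 seconds here, ending in 192.168.39.83). A small sketch of that poll-with-backoff pattern in plain Go; the lease lookup is stubbed out, and waitForIP is an invented helper name.

package main

import (
	"fmt"
	"log"
	"time"
)

// waitForIP polls lookup until it returns a non-empty address or the deadline
// passes. lookup stands in for "read the DHCP leases of the network and match
// the domain's MAC address".
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		log.Printf("will retry after %s: waiting for machine to come up", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the wait between polls (the real helper also adds jitter)
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP after %s", timeout)
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 { // pretend the lease shows up on the 4th poll
			return "", nil
		}
		return "192.168.39.83", nil
	}, time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Found IP for machine: %s", ip)
}
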
	I1009 19:11:52.372115   28654 main.go:141] libmachine: (ha-199780-m02) Waiting for SSH to be available...
	I1009 19:11:52.374841   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.375450   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375560   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH client type: external
	I1009 19:11:52.375580   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa (-rw-------)
	I1009 19:11:52.375612   28654 main.go:141] libmachine: (ha-199780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:52.375635   28654 main.go:141] libmachine: (ha-199780-m02) DBG | About to run SSH command:
	I1009 19:11:52.375646   28654 main.go:141] libmachine: (ha-199780-m02) DBG | exit 0
	I1009 19:11:52.498886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:52.499168   28654 main.go:141] libmachine: (ha-199780-m02) KVM machine creation complete!
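
Creation is only declared complete once a plain "exit 0" succeeds over SSH; the log above shows the exact external client invocation (host-key checking disabled, the freshly generated id_rsa key, user docker on 192.168.39.83). The following sketch runs that same probe from Go with os/exec; the option list is copied from the log, while the sshProbe helper name is invented for illustration.

package main

import (
	"log"
	"os/exec"
)

// sshProbe runs `exit 0` on the new machine using the options the log shows
// for the external SSH client; a nil error means sshd is up and the key works.
func sshProbe(ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	err := sshProbe("192.168.39.83",
		"/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa")
	if err != nil {
		log.Fatalf("machine not reachable over SSH yet: %v", err)
	}
	log.Println("SSH is available; KVM machine creation complete")
}
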
	I1009 19:11:52.499479   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:52.500069   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500241   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500393   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:52.500411   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetState
	I1009 19:11:52.501707   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:52.501728   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:52.501749   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:52.501756   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.503758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.504165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504286   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.504437   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504575   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.504794   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.504979   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.504989   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:52.602177   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.602204   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:52.602213   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.604728   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605107   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.605141   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605291   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.605469   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605606   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605724   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.605872   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.606034   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.606045   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:52.703707   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:52.703764   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:52.703771   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:52.703777   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704032   28654 buildroot.go:166] provisioning hostname "ha-199780-m02"
	I1009 19:11:52.704060   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704231   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.706798   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707185   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.707208   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707350   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.707510   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707650   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707773   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.707888   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.708063   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.708075   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m02 && echo "ha-199780-m02" | sudo tee /etc/hostname
	I1009 19:11:52.823258   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m02
	
	I1009 19:11:52.823287   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.825577   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.825861   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.825888   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.826053   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.826228   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826361   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826462   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.826604   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.826970   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.827005   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:52.936284   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.936322   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:52.936338   28654 buildroot.go:174] setting up certificates
	I1009 19:11:52.936349   28654 provision.go:84] configureAuth start
	I1009 19:11:52.936358   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.936621   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:52.939014   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939357   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.939378   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939565   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.941751   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942083   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.942102   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942262   28654 provision.go:143] copyHostCerts
	I1009 19:11:52.942292   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942326   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:52.942335   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942400   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:52.942490   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942507   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:52.942513   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942543   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:52.942586   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942603   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:52.942608   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942630   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:52.942675   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m02 san=[127.0.0.1 192.168.39.83 ha-199780-m02 localhost minikube]
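
Provisioning generates a per-machine server certificate signed by the shared minikube CA (ca.pem/ca-key.pem above); its SANs have to cover every name and address the machine can be reached by, which is exactly the list in the log: [127.0.0.1 192.168.39.83 ha-199780-m02 localhost minikube]. A minimal sketch of building such a certificate with Go's crypto/x509 follows; loading the CA, generating the key pair and PEM-encoding the result are elided, and the package and function names are illustrative rather than minikube's.

package provision

import (
	"crypto"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for one machine with the shared CA.
// The SAN list mirrors the log line above.
func newServerCert(caCert *x509.Certificate, caKey crypto.Signer, pub crypto.PublicKey) ([]byte, error) {
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-199780-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.83")},
	}
	// DER-encoded certificate, ready to be PEM-encoded as server.pem.
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
}
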
	I1009 19:11:53.040172   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:53.040224   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:53.040246   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.042771   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043144   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.043165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043339   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.043536   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.043695   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.043830   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.125536   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:53.125611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:53.152398   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:53.152462   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:11:53.176418   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:53.176476   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:53.199215   28654 provision.go:87] duration metric: took 262.855174ms to configureAuth
	I1009 19:11:53.199238   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:53.199408   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:53.199489   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.202051   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202440   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.202470   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202579   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.202742   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.202905   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.203044   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.203213   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.203367   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.203381   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:53.429894   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:53.429922   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:53.429933   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetURL
	I1009 19:11:53.431192   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using libvirt version 6000000
	I1009 19:11:53.433633   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.433917   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.433942   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.434095   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:53.434111   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:53.434119   28654 client.go:171] duration metric: took 24.812002035s to LocalClient.Create
	I1009 19:11:53.434141   28654 start.go:167] duration metric: took 24.812066243s to libmachine.API.Create "ha-199780"
	I1009 19:11:53.434153   28654 start.go:293] postStartSetup for "ha-199780-m02" (driver="kvm2")
	I1009 19:11:53.434164   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:53.434178   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.434386   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:53.434414   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.436444   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436741   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.436766   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436885   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.437048   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.437204   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.437329   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.517247   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:53.521546   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:53.521570   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:53.521628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:53.521696   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:53.521706   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:53.521794   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:53.531170   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:53.555463   28654 start.go:296] duration metric: took 121.295956ms for postStartSetup
	I1009 19:11:53.555509   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:53.556089   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.558610   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.558965   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.558990   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.559241   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:53.559417   28654 start.go:128] duration metric: took 24.955894473s to createHost
	I1009 19:11:53.559436   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.561758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562120   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.562145   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562297   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.562466   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562603   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.562800   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.562944   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.562953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:53.659740   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501113.618380735
	
	I1009 19:11:53.659761   28654 fix.go:216] guest clock: 1728501113.618380735
	I1009 19:11:53.659770   28654 fix.go:229] Guest: 2024-10-09 19:11:53.618380735 +0000 UTC Remote: 2024-10-09 19:11:53.559427397 +0000 UTC m=+71.164621077 (delta=58.953338ms)
	I1009 19:11:53.659789   28654 fix.go:200] guest clock delta is within tolerance: 58.953338ms
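
The clock check above compares the guest's time (taken from `date +%s.%N` over SSH) with the host-side timestamp recorded when createHost finished, and only moves on if the difference stays within a tolerance; here the delta was about 59ms. A tiny sketch of that comparison, using the exact values from the log; the 2-second tolerance is an assumption for illustration, not necessarily minikube's setting.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log.
	const guestOut = "1728501113.618380735"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side reference time (the "Remote" timestamp in the log).
	remote := time.Date(2024, 10, 9, 19, 11, 53, 559427397, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative value
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %s; would resync\n", delta)
	}
}
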
	I1009 19:11:53.659795   28654 start.go:83] releasing machines lock for "ha-199780-m02", held for 25.056389443s
	I1009 19:11:53.659818   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.660047   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.662723   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.663038   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.663084   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.665166   28654 out.go:177] * Found network options:
	I1009 19:11:53.666287   28654 out.go:177]   - NO_PROXY=192.168.39.114
	W1009 19:11:53.667466   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.667505   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.667962   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668130   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668248   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:53.668296   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	W1009 19:11:53.668300   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.668381   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:53.668416   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.670930   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671210   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671283   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671304   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671447   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671527   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671552   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671587   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671735   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671750   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.671893   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671912   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.672014   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.672148   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.899517   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:53.905678   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:53.905741   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:53.922185   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:53.922206   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:53.922263   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:53.937820   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:53.953029   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:53.953091   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:53.967078   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:53.981025   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:54.113745   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:54.255530   28654 docker.go:233] disabling docker service ...
	I1009 19:11:54.255587   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:54.270170   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:54.283110   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:54.427830   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:54.542861   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:54.559019   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:54.577775   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:54.577834   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.588489   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:54.588563   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.598988   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.609116   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.619104   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:54.629621   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.640002   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.656572   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.666994   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:54.677176   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:54.677232   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:54.689637   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:54.698765   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:54.819897   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
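
Taken together, the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs becomes the cgroup manager with conmon placed in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls; br_netfilter is then loaded, IP forwarding enabled, and crio restarted. The drop-in plausibly ends up containing lines like the following; this is a reconstruction from the commands above (section headers included as assumptions), not a capture from the machine:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
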
	I1009 19:11:54.911734   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:54.911789   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:54.916451   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:54.916494   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:54.920158   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:54.955402   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:54.955480   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:54.982980   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:55.012563   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:55.013723   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:11:55.014768   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:55.017153   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017506   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:55.017538   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017692   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:55.021943   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:55.034196   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:11:55.034432   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:55.034865   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.034912   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.049583   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I1009 19:11:55.050018   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.050467   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.050491   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.050776   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.050944   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:55.052331   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:55.052611   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.052643   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.066531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1009 19:11:55.066862   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.067348   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.067376   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.067659   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.067826   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:55.067945   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.83
	I1009 19:11:55.067956   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:55.067973   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.068103   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:55.068159   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:55.068171   28654 certs.go:256] generating profile certs ...
	I1009 19:11:55.068256   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:55.068286   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0
	I1009 19:11:55.068307   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.254]
	I1009 19:11:55.274614   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 ...
	I1009 19:11:55.274645   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0: {Name:mkea8c047205788ccead22201bc77c7190717cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274816   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 ...
	I1009 19:11:55.274832   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0: {Name:mk98b6fcd80ec856f6c63ddb6177c8a08e2dbf7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274920   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:55.275082   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
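certs.go above issues the apiserver serving certificate with IP SANs covering the service IP, both node IPs and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.114, 192.168.39.83, 192.168.39.254). The sketch below shows how a certificate with those IP SANs can be produced with the Go standard library; it is only an illustration and self-signs for brevity, whereas minikube signs this cert with its minikubeCA.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed sketch only; the real profile cert is signed by minikubeCA.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs as listed in the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.83"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}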
	I1009 19:11:55.275255   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:55.275273   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:55.275291   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:55.275308   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:55.275327   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:55.275347   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:55.275366   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:55.275383   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:55.275401   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:55.275466   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:55.275511   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:55.275524   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:55.275558   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:55.275590   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:55.275622   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:55.275679   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:55.275720   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.275740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.275758   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.275797   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:55.278862   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279369   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:55.279395   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279612   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:55.279780   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:55.279952   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:55.280049   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:55.351381   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:11:55.355961   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:11:55.367055   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:11:55.371613   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:11:55.382154   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:11:55.386133   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:11:55.395984   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:11:55.399714   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:11:55.409621   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:11:55.413853   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:11:55.423766   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:11:55.427525   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:11:55.437575   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:55.462624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:55.485719   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:55.508128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:55.530803   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:11:55.555486   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:55.580139   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:55.603207   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:55.626373   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:55.649676   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:55.673656   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:55.696721   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:11:55.712647   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:11:55.728611   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:11:55.744619   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:11:55.760726   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:11:55.776763   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:11:55.792315   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:11:55.807929   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:55.813442   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:55.823376   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827581   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.833072   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:55.842843   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:55.852649   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856766   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856802   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.862146   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:55.872016   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:55.881805   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885859   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885905   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.891246   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
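Each CA bundle installed under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL-based clients locate trust anchors. Below is a small illustrative sketch of that hash-then-symlink step, shelling out to openssl exactly as the log does; linkByHash is a hypothetical helper, not minikube code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and links
// it into /etc/ssl/certs/<hash>.0, like the "openssl x509 -hash" + "ln -fs" pair above.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}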
	I1009 19:11:55.901096   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:55.904965   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:55.905009   28654 kubeadm.go:934] updating node {m02 192.168.39.83 8443 v1.31.1 crio true true} ...
	I1009 19:11:55.905077   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:55.905098   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:55.905121   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:55.919709   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:55.919759   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
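kube-vip.go renders the static pod manifest above from the cluster's VIP (192.168.39.254), API port and interface; the pod runs on every control-plane node and uses the plndr-cp-lock lease for leader election so that exactly one node holds the VIP at a time. A toy text/template sketch of that kind of rendering follows; it is not minikube's actual template and keeps only a few of the env knobs.

package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the kube-vip manifest: only the per-cluster
// fields (VIP, port, interface) are parameterised here.
const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.3
    args: ["manager"]
    env:
    - name: address
      value: {{.VIP}}
    - name: port
      value: "{{.Port}}"
    - name: vip_interface
      value: {{.Interface}}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(podTmpl))
	_ = t.Execute(os.Stdout, struct {
		VIP, Interface string
		Port           int
	}{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}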
	I1009 19:11:55.919801   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.929228   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:11:55.929276   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.938319   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:11:55.938340   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938391   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938402   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1009 19:11:55.938404   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1009 19:11:55.942635   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:11:55.942660   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:11:57.241263   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:57.255221   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.255304   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.259158   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:11:57.259186   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:11:57.547794   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.547883   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.562384   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:11:57.562426   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
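Because /var/lib/minikube/binaries/v1.31.1 did not exist on m02, minikube downloads kubectl, kubelet and kubeadm from dl.k8s.io, verifying each against its published .sha256 file, and then pushes the binaries over SSH. Below is a minimal standard-library sketch of the download-and-verify step (kubectl only); it is not minikube's downloader and omits caching, retries and progress reporting.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sum))[0]
	h := sha256.Sum256(bin)
	got := hex.EncodeToString(h[:])
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl verified and written")
}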
	I1009 19:11:57.842477   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:11:57.852027   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:11:57.867591   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:57.883108   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:11:57.898843   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:57.902642   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
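The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the VIP mapping 192.168.39.254. The same rewrite expressed as a short Go sketch, writing the result to a scratch file instead of copying over /etc/hosts with sudo as the log does.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the name, mirroring the grep -v above.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	out := strings.Join(kept, "\n") + "\n"
	// Write to a scratch path; minikube copies its version over /etc/hosts with sudo.
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.new")
}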
	I1009 19:11:57.914959   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:58.028127   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:58.044965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:58.045423   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:58.045473   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:58.059986   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I1009 19:11:58.060458   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:58.060917   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:58.060934   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:58.061238   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:58.061410   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:58.061538   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:58.061653   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:11:58.061673   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:58.064589   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.064969   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:58.064994   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.065152   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:58.065308   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:58.065538   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:58.065661   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:58.210321   28654 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:58.210383   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443"
	I1009 19:12:19.134246   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443": (20.923839028s)
	I1009 19:12:19.134290   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:12:19.605010   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m02 minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:12:19.748442   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:12:19.868185   28654 start.go:319] duration metric: took 21.806636434s to joinCluster
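Joining m02 as a second control plane is a two-step flow: ask the existing node for a join command (kubeadm token create --print-join-command --ttl=0), then run that command on m02 with --control-plane, the advertise address, the bind port and the CRI-O socket, as shown above; here the join itself took about 21s. Below is a rough os/exec sketch of the flow on a single machine; in the log both commands run over SSH on their respective nodes, and the extra flags are appended to the printed command.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on the existing control plane): mint a join command.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): run it with the control-plane flags from the log.
	join += " --control-plane --apiserver-advertise-address=192.168.39.83" +
		" --apiserver-bind-port=8443 --cri-socket unix:///var/run/crio/crio.sock"
	fmt.Println("would run:", join)
	// exec.Command("sh", "-c", "sudo "+join).Run() // uncomment to actually join
}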
	I1009 19:12:19.868265   28654 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:19.868592   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:19.870842   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:12:19.872112   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:12:20.132051   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:12:20.184872   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:12:20.185127   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:12:20.185184   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:12:20.185366   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m02" to be "Ready" ...
	I1009 19:12:20.185447   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.185457   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.185464   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.185468   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.196121   28654 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1009 19:12:20.685641   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.685666   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.685677   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.685683   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.700948   28654 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1009 19:12:21.186360   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.186379   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.186386   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.186390   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.190077   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:21.686495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.686523   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.686535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.686542   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.689757   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.185915   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.185938   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.185949   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.185955   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.189220   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.189830   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:22.685885   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.685909   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.685925   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.685930   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.692565   28654 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 19:12:23.186131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.186153   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.186163   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.186170   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.190703   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:23.685823   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.685851   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.685864   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.685874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.689295   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:24.186259   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.186290   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.186302   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.190419   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:24.190953   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:24.686386   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.686405   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.686412   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.686418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.689349   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:25.186405   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.186431   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.186443   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.186448   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.189677   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:25.685894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.685917   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.685930   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.685938   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.688721   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:26.185700   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.185718   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.185725   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.185729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.189091   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:26.686200   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.686219   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.686227   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.686233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.691177   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:26.691800   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:27.186166   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.186200   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.186216   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.186227   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.208799   28654 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1009 19:12:27.686569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.686596   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.686606   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.686611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.690120   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.186542   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.186562   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.186570   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.186574   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.189659   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.685814   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.685834   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.685842   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.685846   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.689015   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.185658   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.185692   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.185703   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.185708   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.188963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.189656   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:29.686079   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.686104   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.686115   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.686119   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.689437   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.186344   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.186367   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.186378   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.186384   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.189946   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.685870   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.685896   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.685904   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.685909   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.689100   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.186316   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.186342   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.186351   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.186356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.189992   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.190453   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:31.685857   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.685878   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.685886   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.685890   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.689411   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:32.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.186439   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.186450   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.186457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.189297   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:32.686105   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.686126   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.686134   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.686138   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.689698   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.185993   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.186015   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.186024   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.186028   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.189373   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.685932   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.685955   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.685963   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.685968   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.689670   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.690285   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:34.185640   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.185662   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.185670   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.185674   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.188694   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:34.686203   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.686223   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.686231   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.690146   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.185607   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.185628   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.185636   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.185640   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.188854   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.685726   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.685746   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.685759   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.685764   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.689172   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.186278   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.186301   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.186312   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.189767   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.190519   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:36.685809   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.685841   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.685849   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.685853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.688923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.185894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.185920   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.185933   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.185940   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.189465   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.686197   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.686222   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.686230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.689394   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.185922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.185948   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.185956   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.185961   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.189255   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.685706   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.685729   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.685742   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.685751   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.689204   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.689971   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:39.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.186433   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.186447   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.186452   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.189522   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.190154   28654 node_ready.go:49] node "ha-199780-m02" has status "Ready":"True"
	I1009 19:12:39.190172   28654 node_ready.go:38] duration metric: took 19.004790985s for node "ha-199780-m02" to be "Ready" ...
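The round_trippers lines above are the readiness poll: roughly every 500ms a GET of /api/v1/nodes/ha-199780-m02 until the Ready condition reports True, which took about 19s here. Below is a stripped-down version of that poll using net/http and the client certificate and CA paths reported by kapi.go earlier; the real code goes through client-go and has proper error handling, so treat this only as a sketch.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	// Client cert/CA paths as reported by kapi.go above.
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt",
		"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert}, RootCAs: pool}}}

	for {
		resp, err := client.Get("https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02")
		if err != nil {
			panic(err)
		}
		var n node
		json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Println("node Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}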
	I1009 19:12:39.190183   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:12:39.190256   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:39.190268   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.190277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.190292   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.194625   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:39.201057   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.201129   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:12:39.201137   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.201144   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.201149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.203552   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.204277   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.204291   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.204298   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.204303   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.206434   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.207017   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.207033   28654 pod_ready.go:82] duration metric: took 5.954504ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207041   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:12:39.207128   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.207139   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.207148   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.209367   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.210180   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.210198   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.210204   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.210207   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.212254   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.212911   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.212929   28654 pod_ready.go:82] duration metric: took 5.881939ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212939   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212996   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:12:39.213004   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.213010   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.213014   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.215519   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.216198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.216212   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.216222   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.216228   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.218680   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.219274   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.219293   28654 pod_ready.go:82] duration metric: took 6.345815ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219306   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219361   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:12:39.219370   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.219379   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.219388   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.222905   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.223852   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.223867   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.223874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.223880   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.226122   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.226546   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.226559   28654 pod_ready.go:82] duration metric: took 7.244216ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.226571   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.386954   28654 request.go:632] Waited for 160.312334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387019   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387028   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.387041   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.387059   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.390052   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.587135   28654 request.go:632] Waited for 196.31885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587196   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587203   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.587211   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.587219   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.590448   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.591164   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.591183   28654 pod_ready.go:82] duration metric: took 364.606313ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.591192   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.787247   28654 request.go:632] Waited for 195.987261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787335   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.787346   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.787354   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.790620   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.986772   28654 request.go:632] Waited for 195.363358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986825   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986830   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.986837   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.986840   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.990003   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.990664   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.990682   28654 pod_ready.go:82] duration metric: took 399.483816ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.990691   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.186433   28654 request.go:632] Waited for 195.681011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186513   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186524   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.186535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.186544   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.189683   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.386818   28654 request.go:632] Waited for 196.355604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386887   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386893   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.386900   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.386905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.391133   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:40.391614   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.391638   28654 pod_ready.go:82] duration metric: took 400.93972ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.391651   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.586680   28654 request.go:632] Waited for 194.949325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586742   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.586750   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.586755   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.590444   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.786422   28654 request.go:632] Waited for 195.280915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786501   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.786509   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.786513   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.790326   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.791006   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.791029   28654 pod_ready.go:82] duration metric: took 399.365639ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.791046   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.987070   28654 request.go:632] Waited for 195.933748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987136   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.987143   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.987147   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.990605   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.186624   28654 request.go:632] Waited for 195.268606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186692   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186704   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.186711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.186715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.189956   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.190470   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.190489   28654 pod_ready.go:82] duration metric: took 399.435329ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.190501   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.386649   28654 request.go:632] Waited for 196.07336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386706   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.386713   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.386716   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.390032   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.587033   28654 request.go:632] Waited for 196.334104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587126   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587138   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.587149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.587167   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.590021   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.590641   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.590663   28654 pod_ready.go:82] duration metric: took 400.153892ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.590678   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.786648   28654 request.go:632] Waited for 195.890444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786708   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.786719   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.786729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.789369   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.987345   28654 request.go:632] Waited for 197.361828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987411   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987416   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.987424   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.987427   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.990745   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.991278   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.991294   28654 pod_ready.go:82] duration metric: took 400.607782ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.991303   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.187413   28654 request.go:632] Waited for 196.036626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187472   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187478   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.187488   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.187495   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.190480   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.386422   28654 request.go:632] Waited for 195.271897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386476   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386482   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.386489   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.386493   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.389175   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.389733   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:42.389754   28654 pod_ready.go:82] duration metric: took 398.44435ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.389768   28654 pod_ready.go:39] duration metric: took 3.199572136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
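
The block above is minikube's pod_ready helper polling each system-critical pod (and its node) until the pod's PodReady condition reports True. Below is a minimal sketch of that kind of readiness poll using client-go; the kubeconfig path, namespace, pod name, and timeout are placeholders, and this is not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path, namespace, and pod name.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-r8lg7", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
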
	I1009 19:12:42.389785   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:12:42.389849   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:42.407811   28654 api_server.go:72] duration metric: took 22.539512335s to wait for apiserver process to appear ...
	I1009 19:12:42.407834   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:12:42.407855   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:12:42.414877   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:12:42.414962   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:12:42.414974   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.414984   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.414991   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.416098   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:12:42.416185   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:12:42.416202   28654 api_server.go:131] duration metric: took 8.360977ms to wait for apiserver health ...
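
The healthz probe above is a plain GET against https://192.168.39.114:8443/healthz whose body is expected to be "ok". A self-contained sketch of the same probe follows; TLS verification is skipped here only to keep the example short, whereas minikube talks to the apiserver with the cluster's CA material.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.114:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok"
}
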
	I1009 19:12:42.416212   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:12:42.587017   28654 request.go:632] Waited for 170.742751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587127   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587142   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.587151   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.587157   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.592323   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:12:42.596935   28654 system_pods.go:59] 17 kube-system pods found
	I1009 19:12:42.596960   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.596966   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.596971   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.596974   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.596977   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.596980   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.596983   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.596991   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.596995   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.597000   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.597004   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.597007   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.597011   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.597015   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.597018   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.597023   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.597026   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.597031   28654 system_pods.go:74] duration metric: took 180.813466ms to wait for pod list to return data ...
	I1009 19:12:42.597039   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:12:42.787461   28654 request.go:632] Waited for 190.355387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787510   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787515   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.787523   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.787526   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.791707   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.791908   28654 default_sa.go:45] found service account: "default"
	I1009 19:12:42.791921   28654 default_sa.go:55] duration metric: took 194.876803ms for default service account to be created ...
	I1009 19:12:42.791929   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:12:42.987347   28654 request.go:632] Waited for 195.347718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987402   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987407   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.987415   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.987418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.992125   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.996490   28654 system_pods.go:86] 17 kube-system pods found
	I1009 19:12:42.996520   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.996536   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.996541   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.996545   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.996552   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.996564   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.996567   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.996571   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.996576   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.996580   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.996583   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.996587   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.996591   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.996594   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.996598   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.996603   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.996605   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.996612   28654 system_pods.go:126] duration metric: took 204.678176ms to wait for k8s-apps to be running ...
	I1009 19:12:42.996621   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:12:42.996661   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:12:43.012943   28654 system_svc.go:56] duration metric: took 16.312977ms WaitForService to wait for kubelet
	I1009 19:12:43.012964   28654 kubeadm.go:582] duration metric: took 23.14466791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:12:43.012979   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:12:43.186683   28654 request.go:632] Waited for 173.643549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186731   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186737   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:43.186744   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:43.186750   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:43.190743   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:43.191568   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191597   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191608   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191612   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191618   28654 node_conditions.go:105] duration metric: took 178.633815ms to run NodePressure ...
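
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's built-in token-bucket rate limiter: its defaults (QPS=5, Burst=10) are easily exceeded by a tight polling loop like this one, so requests are delayed locally before they ever reach the apiserver. A sketch of where those limits live; the values below are illustrative, not what minikube configures.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go's defaults are QPS=5 and Burst=10; raising them reduces the
	// local queueing that produces the "client-side throttling" log lines.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset ready: %T\n", cs)
}
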
	I1009 19:12:43.191635   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:12:43.191663   28654 start.go:255] writing updated cluster config ...
	I1009 19:12:43.193878   28654 out.go:201] 
	I1009 19:12:43.195204   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:43.195296   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.196947   28654 out.go:177] * Starting "ha-199780-m03" control-plane node in "ha-199780" cluster
	I1009 19:12:43.198242   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:12:43.198257   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:12:43.198354   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:12:43.198368   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:12:43.198453   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.198644   28654 start.go:360] acquireMachinesLock for ha-199780-m03: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:12:43.198693   28654 start.go:364] duration metric: took 30.243µs to acquireMachinesLock for "ha-199780-m03"
	I1009 19:12:43.198715   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:43.198839   28654 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1009 19:12:43.200292   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:12:43.200365   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:12:43.200395   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:12:43.215501   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I1009 19:12:43.215883   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:12:43.216432   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:12:43.216461   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:12:43.216780   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:12:43.216973   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:12:43.217128   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:12:43.217269   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:12:43.217296   28654 client.go:168] LocalClient.Create starting
	I1009 19:12:43.217327   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:12:43.217360   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217379   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217439   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:12:43.217464   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217486   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217518   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:12:43.217529   28654 main.go:141] libmachine: (ha-199780-m03) Calling .PreCreateCheck
	I1009 19:12:43.217680   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:12:43.218031   28654 main.go:141] libmachine: Creating machine...
	I1009 19:12:43.218043   28654 main.go:141] libmachine: (ha-199780-m03) Calling .Create
	I1009 19:12:43.218158   28654 main.go:141] libmachine: (ha-199780-m03) Creating KVM machine...
	I1009 19:12:43.219370   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing default KVM network
	I1009 19:12:43.219545   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing private KVM network mk-ha-199780
	I1009 19:12:43.219670   28654 main.go:141] libmachine: (ha-199780-m03) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.219694   28654 main.go:141] libmachine: (ha-199780-m03) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:12:43.219770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.219647   29426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.219839   28654 main.go:141] libmachine: (ha-199780-m03) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:12:43.456571   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.456478   29426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa...
	I1009 19:12:43.637087   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637007   29426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk...
	I1009 19:12:43.637111   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing magic tar header
	I1009 19:12:43.637123   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing SSH key tar header
	I1009 19:12:43.637132   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637111   29426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.637237   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03
	I1009 19:12:43.637256   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 (perms=drwx------)
	I1009 19:12:43.637263   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:12:43.637277   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:12:43.637285   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.637293   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:12:43.637301   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:12:43.637308   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:12:43.637313   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home
	I1009 19:12:43.637322   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Skipping /home - not owner
	I1009 19:12:43.637330   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:12:43.637338   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:12:43.637345   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:12:43.637355   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:12:43.637364   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:43.638194   28654 main.go:141] libmachine: (ha-199780-m03) define libvirt domain using xml: 
	I1009 19:12:43.638216   28654 main.go:141] libmachine: (ha-199780-m03) <domain type='kvm'>
	I1009 19:12:43.638226   28654 main.go:141] libmachine: (ha-199780-m03)   <name>ha-199780-m03</name>
	I1009 19:12:43.638239   28654 main.go:141] libmachine: (ha-199780-m03)   <memory unit='MiB'>2200</memory>
	I1009 19:12:43.638251   28654 main.go:141] libmachine: (ha-199780-m03)   <vcpu>2</vcpu>
	I1009 19:12:43.638258   28654 main.go:141] libmachine: (ha-199780-m03)   <features>
	I1009 19:12:43.638266   28654 main.go:141] libmachine: (ha-199780-m03)     <acpi/>
	I1009 19:12:43.638275   28654 main.go:141] libmachine: (ha-199780-m03)     <apic/>
	I1009 19:12:43.638288   28654 main.go:141] libmachine: (ha-199780-m03)     <pae/>
	I1009 19:12:43.638296   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638304   28654 main.go:141] libmachine: (ha-199780-m03)   </features>
	I1009 19:12:43.638314   28654 main.go:141] libmachine: (ha-199780-m03)   <cpu mode='host-passthrough'>
	I1009 19:12:43.638338   28654 main.go:141] libmachine: (ha-199780-m03)   
	I1009 19:12:43.638360   28654 main.go:141] libmachine: (ha-199780-m03)   </cpu>
	I1009 19:12:43.638375   28654 main.go:141] libmachine: (ha-199780-m03)   <os>
	I1009 19:12:43.638386   28654 main.go:141] libmachine: (ha-199780-m03)     <type>hvm</type>
	I1009 19:12:43.638397   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='cdrom'/>
	I1009 19:12:43.638406   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='hd'/>
	I1009 19:12:43.638416   28654 main.go:141] libmachine: (ha-199780-m03)     <bootmenu enable='no'/>
	I1009 19:12:43.638425   28654 main.go:141] libmachine: (ha-199780-m03)   </os>
	I1009 19:12:43.638435   28654 main.go:141] libmachine: (ha-199780-m03)   <devices>
	I1009 19:12:43.638451   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='cdrom'>
	I1009 19:12:43.638468   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/boot2docker.iso'/>
	I1009 19:12:43.638480   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hdc' bus='scsi'/>
	I1009 19:12:43.638491   28654 main.go:141] libmachine: (ha-199780-m03)       <readonly/>
	I1009 19:12:43.638498   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638511   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='disk'>
	I1009 19:12:43.638529   28654 main.go:141] libmachine: (ha-199780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:12:43.638545   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk'/>
	I1009 19:12:43.638557   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hda' bus='virtio'/>
	I1009 19:12:43.638566   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638575   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638585   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='mk-ha-199780'/>
	I1009 19:12:43.638600   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638613   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638624   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638637   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='default'/>
	I1009 19:12:43.638647   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638658   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638665   28654 main.go:141] libmachine: (ha-199780-m03)     <serial type='pty'>
	I1009 19:12:43.638685   28654 main.go:141] libmachine: (ha-199780-m03)       <target port='0'/>
	I1009 19:12:43.638701   28654 main.go:141] libmachine: (ha-199780-m03)     </serial>
	I1009 19:12:43.638713   28654 main.go:141] libmachine: (ha-199780-m03)     <console type='pty'>
	I1009 19:12:43.638724   28654 main.go:141] libmachine: (ha-199780-m03)       <target type='serial' port='0'/>
	I1009 19:12:43.638734   28654 main.go:141] libmachine: (ha-199780-m03)     </console>
	I1009 19:12:43.638742   28654 main.go:141] libmachine: (ha-199780-m03)     <rng model='virtio'>
	I1009 19:12:43.638760   28654 main.go:141] libmachine: (ha-199780-m03)       <backend model='random'>/dev/random</backend>
	I1009 19:12:43.638775   28654 main.go:141] libmachine: (ha-199780-m03)     </rng>
	I1009 19:12:43.638786   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638796   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638812   28654 main.go:141] libmachine: (ha-199780-m03)   </devices>
	I1009 19:12:43.638828   28654 main.go:141] libmachine: (ha-199780-m03) </domain>
	I1009 19:12:43.638836   28654 main.go:141] libmachine: (ha-199780-m03) 
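
The XML above is the libvirt domain the kvm2 driver defines for the new node: boot from the boot2docker ISO, a raw disk image, one NIC on the private mk-ha-199780 network and one on the default network, a serial console, and a virtio RNG. As a rough sketch only, the same define-and-start steps could be driven through the libvirt Go bindings (libvirt.org/go/libvirt); minikube actually performs them inside its docker-machine driver plugin, so this is an approximation, and the XML file path is a placeholder.

package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Placeholder path to a domain definition like the one logged above.
	xml, err := os.ReadFile("ha-199780-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()
	if err := dom.Create(); err != nil { // "Creating domain..."
		panic(err)
	}
	live, err := dom.GetXMLDesc(0)
	if err != nil {
		panic(err)
	}
	fmt.Printf("domain defined and started, live XML is %d bytes\n", len(live))
}
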
	I1009 19:12:43.645429   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:1f:d1:3b in network default
	I1009 19:12:43.645983   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:43.646001   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring networks are active...
	I1009 19:12:43.646747   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network default is active
	I1009 19:12:43.647149   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network mk-ha-199780 is active
	I1009 19:12:43.647523   28654 main.go:141] libmachine: (ha-199780-m03) Getting domain xml...
	I1009 19:12:43.648287   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:44.847549   28654 main.go:141] libmachine: (ha-199780-m03) Waiting to get IP...
	I1009 19:12:44.848392   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:44.848787   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:44.848829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:44.848770   29426 retry.go:31] will retry after 229.997293ms: waiting for machine to come up
	I1009 19:12:45.079971   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.080455   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.080486   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.080421   29426 retry.go:31] will retry after 304.992826ms: waiting for machine to come up
	I1009 19:12:45.386902   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.387362   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.387386   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.387322   29426 retry.go:31] will retry after 327.958718ms: waiting for machine to come up
	I1009 19:12:45.716733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.717214   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.717239   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.717174   29426 retry.go:31] will retry after 508.576077ms: waiting for machine to come up
	I1009 19:12:46.227904   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.228327   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.228353   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.228287   29426 retry.go:31] will retry after 585.555609ms: waiting for machine to come up
	I1009 19:12:46.814896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.815296   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.815326   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.815257   29426 retry.go:31] will retry after 940.877771ms: waiting for machine to come up
	I1009 19:12:47.757334   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:47.757733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:47.757767   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:47.757680   29426 retry.go:31] will retry after 1.078987913s: waiting for machine to come up
	I1009 19:12:48.838156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:48.838584   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:48.838612   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:48.838534   29426 retry.go:31] will retry after 1.204337562s: waiting for machine to come up
	I1009 19:12:50.044036   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:50.044425   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:50.044447   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:50.044387   29426 retry.go:31] will retry after 1.424565558s: waiting for machine to come up
	I1009 19:12:51.470825   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:51.471291   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:51.471328   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:51.471250   29426 retry.go:31] will retry after 1.95975676s: waiting for machine to come up
	I1009 19:12:53.432604   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:53.433116   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:53.433142   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:53.433070   29426 retry.go:31] will retry after 2.780245822s: waiting for machine to come up
	I1009 19:12:56.216025   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:56.216374   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:56.216395   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:56.216337   29426 retry.go:31] will retry after 3.28653641s: waiting for machine to come up
	I1009 19:12:59.504791   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:59.505156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:59.505184   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:59.505128   29426 retry.go:31] will retry after 4.186849932s: waiting for machine to come up
	I1009 19:13:03.693337   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:03.693747   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:13:03.693770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:13:03.693703   29426 retry.go:31] will retry after 5.146937605s: waiting for machine to come up
	I1009 19:13:08.842460   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.842868   28654 main.go:141] libmachine: (ha-199780-m03) Found IP for machine: 192.168.39.84
	I1009 19:13:08.842887   28654 main.go:141] libmachine: (ha-199780-m03) Reserving static IP address...
	I1009 19:13:08.842896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.843320   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find host DHCP lease matching {name: "ha-199780-m03", mac: "52:54:00:15:92:44", ip: "192.168.39.84"} in network mk-ha-199780
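
The "will retry after ...: waiting for machine to come up" lines above are a backoff loop that re-reads the network's DHCP leases until the new VM's MAC address shows up with an IP. A generic sketch of that retry pattern follows; the function names, delays, and fake condition are hypothetical and not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor retries fn with growing, jittered delays until it succeeds or the
// timeout elapses - the same shape as the "will retry after ..." loop above.
func waitFor(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 5 { // pretend the DHCP lease appears on the fifth poll
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 2*time.Minute)
	fmt.Println("result:", err)
}
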
	I1009 19:13:08.913543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Getting to WaitForSSH function...
	I1009 19:13:08.913573   28654 main.go:141] libmachine: (ha-199780-m03) Reserved static IP address: 192.168.39.84
	I1009 19:13:08.913586   28654 main.go:141] libmachine: (ha-199780-m03) Waiting for SSH to be available...
	I1009 19:13:08.916270   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916658   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:92:44}
	I1009 19:13:08.916682   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916805   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH client type: external
	I1009 19:13:08.916829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa (-rw-------)
	I1009 19:13:08.916873   28654 main.go:141] libmachine: (ha-199780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:13:08.916898   28654 main.go:141] libmachine: (ha-199780-m03) DBG | About to run SSH command:
	I1009 19:13:08.916914   28654 main.go:141] libmachine: (ha-199780-m03) DBG | exit 0
	I1009 19:13:09.046941   28654 main.go:141] libmachine: (ha-199780-m03) DBG | SSH cmd err, output: <nil>: 
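
WaitForSSH above shells out to /usr/bin/ssh with the listed options and simply runs "exit 0" until the command succeeds. An equivalent probe written with golang.org/x/crypto/ssh is sketched below; the private-key path is a placeholder and the address matches the lease found above.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// sshReady dials the node and runs "exit 0", mirroring the probe above.
func sshReady(addr, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	if err := sshReady("192.168.39.84:22", "/path/to/id_rsa"); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is available")
}
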
	I1009 19:13:09.047218   28654 main.go:141] libmachine: (ha-199780-m03) KVM machine creation complete!
	I1009 19:13:09.047540   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:09.048076   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048290   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048435   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:13:09.048449   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetState
	I1009 19:13:09.049768   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:13:09.049784   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:13:09.049792   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:13:09.049800   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.051899   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052232   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.052256   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052390   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.052558   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052690   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052792   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.052919   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.053134   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.053146   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:13:09.162161   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.162193   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:13:09.162204   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.165282   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165740   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.165770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165998   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.166189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166372   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166511   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.166658   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.166820   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.166830   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:13:09.279803   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:13:09.279876   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:13:09.279888   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:13:09.279896   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280130   28654 buildroot.go:166] provisioning hostname "ha-199780-m03"
	I1009 19:13:09.280155   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280355   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.282543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.282879   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.282903   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.283031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.283188   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283335   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283479   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.283637   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.283800   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.283813   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m03 && echo "ha-199780-m03" | sudo tee /etc/hostname
	I1009 19:13:09.410249   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m03
	
	I1009 19:13:09.410286   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.413156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.413597   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413831   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.414036   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414350   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.414484   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.414653   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.414676   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:13:09.536419   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.536443   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:13:09.536456   28654 buildroot.go:174] setting up certificates
	I1009 19:13:09.536466   28654 provision.go:84] configureAuth start
	I1009 19:13:09.536474   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.536766   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:09.539383   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539742   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.539769   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539905   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.542068   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542398   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.542433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542583   28654 provision.go:143] copyHostCerts
	I1009 19:13:09.542606   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542633   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:13:09.542642   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542706   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:13:09.542776   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542794   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:13:09.542798   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542825   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:13:09.542870   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542886   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:13:09.542891   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542910   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:13:09.542956   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m03 san=[127.0.0.1 192.168.39.84 ha-199780-m03 localhost minikube]
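The per-node server certificate generated above carries a SAN list covering loopback, the node IP, the node hostname and "minikube". A minimal way to confirm the SANs on the generated server.pem (path taken from the auth options earlier in this log; assumes openssl is available on the CI host):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'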
	I1009 19:13:09.606712   28654 provision.go:177] copyRemoteCerts
	I1009 19:13:09.606761   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:13:09.606781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.609303   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609661   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.609689   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609868   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.610022   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.610145   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.610298   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:09.696779   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:13:09.696841   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:13:09.720751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:13:09.720811   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:13:09.744059   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:13:09.744114   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:13:09.767833   28654 provision.go:87] duration metric: took 231.356763ms to configureAuth
	I1009 19:13:09.767867   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:13:09.768111   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:09.768195   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.770602   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.770927   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.770956   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.771124   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.771314   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771473   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.771780   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.771973   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.772002   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:13:09.999632   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:13:09.999662   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:13:09.999673   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetURL
	I1009 19:13:10.001043   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using libvirt version 6000000
	I1009 19:13:10.002982   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003339   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.003364   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003485   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:13:10.003499   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:13:10.003506   28654 client.go:171] duration metric: took 26.786200346s to LocalClient.Create
	I1009 19:13:10.003528   28654 start.go:167] duration metric: took 26.786259048s to libmachine.API.Create "ha-199780"
	I1009 19:13:10.003541   28654 start.go:293] postStartSetup for "ha-199780-m03" (driver="kvm2")
	I1009 19:13:10.003557   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:13:10.003580   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.003751   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:13:10.003777   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.005954   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006305   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.006342   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006472   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.006621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.006781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.006914   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.097042   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:13:10.101538   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:13:10.101559   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:13:10.101628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:13:10.101716   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:13:10.101727   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:13:10.101831   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:13:10.111544   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:10.138321   28654 start.go:296] duration metric: took 134.764482ms for postStartSetup
	I1009 19:13:10.138362   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:10.138886   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.141464   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.141752   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.141798   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.142045   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:13:10.142239   28654 start.go:128] duration metric: took 26.94338984s to createHost
	I1009 19:13:10.142260   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.144573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.144860   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.144895   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.145048   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.145233   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145397   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145561   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.145727   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:10.145915   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:10.145928   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:13:10.259958   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501190.239755663
	
	I1009 19:13:10.259981   28654 fix.go:216] guest clock: 1728501190.239755663
	I1009 19:13:10.259990   28654 fix.go:229] Guest: 2024-10-09 19:13:10.239755663 +0000 UTC Remote: 2024-10-09 19:13:10.142249873 +0000 UTC m=+147.747443556 (delta=97.50579ms)
	I1009 19:13:10.260009   28654 fix.go:200] guest clock delta is within tolerance: 97.50579ms
	I1009 19:13:10.260014   28654 start.go:83] releasing machines lock for "ha-199780-m03", held for 27.061310572s
	I1009 19:13:10.260031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.260248   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.262692   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.263042   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.263090   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.265368   28654 out.go:177] * Found network options:
	I1009 19:13:10.266603   28654 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.83
	W1009 19:13:10.267719   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.267740   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.267752   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268176   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268354   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268457   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:13:10.268495   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	W1009 19:13:10.268522   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.268539   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.268607   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:13:10.268629   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.271001   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271378   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271413   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271563   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.271675   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.271760   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.271841   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.271883   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271905   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.272050   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.272201   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.272349   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.272499   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.509806   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:13:10.515665   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:13:10.515723   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:13:10.534296   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:13:10.534319   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:13:10.534372   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:13:10.550041   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:13:10.563633   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:13:10.563683   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:13:10.577637   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:13:10.592588   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:13:10.712305   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:13:10.879292   28654 docker.go:233] disabling docker service ...
	I1009 19:13:10.879381   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:13:10.894134   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:13:10.907059   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:13:11.025068   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:13:11.146057   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:13:11.160573   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:13:11.181994   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:13:11.182045   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.191765   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:13:11.191812   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.201883   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.212073   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.222390   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:13:11.232857   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.243298   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.262217   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
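Pieced together from the sed commands above, the /etc/crio/crio.conf.d/02-crio.conf drop-in should end up looking roughly like the sketch below (a reconstruction from the edits, not a capture from the node):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]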
	I1009 19:13:11.272906   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:13:11.282747   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:13:11.282797   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:13:11.296609   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
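The sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on. Those runtime changes do not survive a reboot on their own; a persistent equivalent (not something minikube does here) would be:

    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes.conf
    sudo sysctl --system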
	I1009 19:13:11.306096   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:11.423441   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:13:11.515740   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:13:11.515821   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:13:11.520647   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:13:11.520700   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:13:11.524288   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:13:11.564050   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:13:11.564119   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.592463   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.620536   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:13:11.622484   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:13:11.623769   28654 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.83
	I1009 19:13:11.624794   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:11.627494   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.627836   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:11.627861   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.628050   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:13:11.632057   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:11.644307   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:13:11.644526   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:11.644823   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.644864   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.660098   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1009 19:13:11.660500   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.660929   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.660963   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.661312   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.661490   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:13:11.662965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:11.663268   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.663304   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.677584   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I1009 19:13:11.678002   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.678412   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.678433   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.678716   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.678874   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:11.678992   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.84
	I1009 19:13:11.679002   28654 certs.go:194] generating shared ca certs ...
	I1009 19:13:11.679014   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.679142   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:13:11.679180   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:13:11.679190   28654 certs.go:256] generating profile certs ...
	I1009 19:13:11.679253   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:13:11.679275   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8
	I1009 19:13:11.679293   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:13:11.751003   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 ...
	I1009 19:13:11.751029   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8: {Name:mkf155e8357b65010528843e053f2a71f20ad105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751190   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 ...
	I1009 19:13:11.751202   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8: {Name:mk6ff6d5eec7167bd850e69dc06edb50691eb6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751267   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:13:11.751393   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
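The regenerated apiserver certificate has to carry every control-plane endpoint a client might dial: the service IP 10.96.0.1, loopback, all three node IPs and the kube-vip VIP 192.168.39.254. Once the cert has been copied onto the node (the /var/lib/minikube/certs path used later in this log), the SANs can be confirmed with:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'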
	I1009 19:13:11.751509   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:13:11.751523   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:13:11.751535   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:13:11.751550   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:13:11.751563   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:13:11.751576   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:13:11.751588   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:13:11.751600   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:13:11.771159   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:13:11.771229   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:13:11.771259   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:13:11.771269   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:13:11.771293   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:13:11.771314   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:13:11.771335   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:13:11.771370   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:11.771395   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:13:11.771408   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:11.771420   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:13:11.771451   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:11.774438   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.774845   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:11.774865   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.775017   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:11.775204   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:11.775350   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:11.775478   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:11.851359   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:13:11.856664   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:13:11.868123   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:13:11.875260   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:13:11.887341   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:13:11.891724   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:13:11.902332   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:13:11.906621   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:13:11.916908   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:13:11.921562   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:13:11.931584   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:13:11.935971   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:13:11.946941   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:13:11.972757   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:13:11.996080   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:13:12.019624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:13:12.042711   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1009 19:13:12.067239   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:13:12.094118   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:13:12.120234   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:13:12.143055   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:13:12.165868   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:13:12.188853   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:13:12.211293   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:13:12.227623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:13:12.243623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:13:12.260811   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:13:12.278131   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:13:12.295237   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:13:12.312441   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:13:12.328516   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:13:12.334428   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:13:12.345201   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349589   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.355741   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:13:12.366097   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:13:12.376756   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381423   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381474   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.387265   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:13:12.398550   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:13:12.410065   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414879   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414939   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.420521   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
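The *.0 symlink names above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject-hash lookups: `openssl x509 -hash` prints the value OpenSSL expects as the link name under /etc/ssl/certs. Recreating one of the links by hand, for illustration:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"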
	I1009 19:13:12.431459   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:13:12.435599   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:13:12.435653   28654 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I1009 19:13:12.435745   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
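The kubelet flags above are written as a systemd drop-in that overrides ExecStart with node-specific values (--hostname-override and --node-ip for m03). Once the unit files scp'd a few lines below are in place, the effective unit can be inspected with standard systemd tooling on the node:

    systemctl cat kubelet                 # unit plus drop-ins
    systemctl show -p ExecStart kubelet   # the merged ExecStart actually used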
	I1009 19:13:12.435776   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:13:12.435816   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:13:12.450815   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:13:12.450880   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
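The manifest above runs kube-vip as a static pod on each control-plane node: leader election via the plndr-cp-lock lease, with the leader ARP-advertising the VIP 192.168.39.254 on eth0 and load-balancing API traffic on port 8443. A quick way to check this from a working cluster (the second command is run on the current leader node):

    kubectl -n kube-system get lease plndr-cp-lock     # shows which node holds the VIP
    ip -4 addr show dev eth0 | grep 192.168.39.254     # VIP present on the leader's interface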
	I1009 19:13:12.450927   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.462732   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:13:12.462797   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.473333   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1009 19:13:12.473358   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473356   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:13:12.473375   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473392   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1009 19:13:12.473419   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473431   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473433   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:12.484568   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:13:12.484600   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1009 19:13:12.496090   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496156   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:13:12.496169   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496179   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:13:12.547231   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:13:12.547271   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
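
Note: the stat failures above are expected on a fresh VM; minikube downloads kubeadm, kubectl and kubelet once into the host-side cache (verifying each against the .sha256 file published next to it on dl.k8s.io) and then scp's them into /var/lib/minikube/binaries/v1.31.1. A rough, hedged equivalent of the download-and-verify step for one binary, using the same URLs that appear in the log:

	# fetch kubeadm v1.31.1 and its published checksum
	curl -fsSLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm
	curl -fsSLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	# the .sha256 file holds only the digest, so build a line sha256sum can check
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
	chmod +x kubeadm
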
	I1009 19:13:13.298298   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:13:13.308347   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:13:13.325500   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:13:13.341701   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:13:13.358009   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:13:13.361852   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
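
Note: the bash one-liner above keeps the hosts entry idempotent: it drops any existing line for control-plane.minikube.internal, appends the current VIP mapping, and copies the result back over /etc/hosts, so repeated provisioning never stacks duplicates. A quick check from inside the VM (hedged; assumes an SSH session on the node):

	# should resolve to the kube-vip VIP 192.168.39.254, and appear exactly once
	getent hosts control-plane.minikube.internal
	grep -c control-plane.minikube.internal /etc/hosts
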
	I1009 19:13:13.374963   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:13.498686   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:13.518977   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:13.519473   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:13.519531   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:13.538200   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I1009 19:13:13.538624   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:13.539117   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:13.539147   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:13.539481   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:13.539662   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:13.539788   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fal
se istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:13:13.539943   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:13:13.539967   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:13.542836   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543274   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:13.543303   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543418   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:13.543577   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:13.543722   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:13.543861   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:13.700075   28654 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:13.700122   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I1009 19:13:36.009706   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (22.309560416s)
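
Note: the ~22s join above is driven by the token minted on the primary at 19:13:13 ("kubeadm token create --print-join-command --ttl=0"); minikube appends --control-plane, --apiserver-advertise-address and --apiserver-bind-port itself because it has already copied the shared CA and certificates onto m03. For comparison only (not what this test does), the usual upstream way to obtain a control-plane join command is roughly:

	# on an existing control-plane node: re-upload the shared certs and note the printed key
	sudo kubeadm init phase upload-certs --upload-certs
	# print a join command, then append: --control-plane --certificate-key <key from above>
	sudo kubeadm token create --print-join-command
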
	I1009 19:13:36.009741   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:13:36.574647   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m03 minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:13:36.718344   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:13:36.828582   28654 start.go:319] duration metric: took 23.288789983s to joinCluster
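
Note: immediately after the join, minikube labels ha-199780-m03 with its version/commit metadata and removes the node-role.kubernetes.io/control-plane:NoSchedule taint (the trailing "-" in the taint command above means "remove"), because in this HA profile every control-plane member is also a worker (Worker:true in the node spec). A small, hedged verification against the new node:

	# taints should come back empty (no control-plane NoSchedule entry left)
	kubectl get node ha-199780-m03 -o jsonpath='{.spec.taints}{"\n"}'
	# the minikube.k8s.io/* labels applied by the kubectl label call above
	kubectl get node ha-199780-m03 --show-labels
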
	I1009 19:13:36.828663   28654 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:36.828971   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:36.830104   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:13:36.831350   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:37.149519   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:37.192508   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:13:37.192892   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:13:37.192972   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:13:37.193248   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m03" to be "Ready" ...
	I1009 19:13:37.193328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.193338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.193350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.193359   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.197001   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:37.693747   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.693768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.693780   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.693785   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.697648   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.193891   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.193913   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.193924   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.193929   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.197274   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.693429   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.693457   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.693469   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.693474   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.696864   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:39.193488   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.193508   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.193514   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.193519   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.196227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:39.196768   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:39.694269   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.694294   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.694306   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.694313   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.697293   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:40.193909   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.193938   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.193948   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.193953   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.197226   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:40.693770   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.693793   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.693804   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.693809   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.697070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:41.194260   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.194291   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.194295   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.197138   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:41.197715   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:41.694049   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.694075   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.694087   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.694094   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.697134   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.194287   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.194311   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.194321   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.194327   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.197589   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.693552   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.693571   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.693581   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.693588   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.696963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.193761   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.193786   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.193798   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.193806   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.197438   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.198158   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:43.693694   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.693716   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.693724   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.693728   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.697267   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.193683   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.193704   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.193711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.193715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.197056   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.693897   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.693918   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.693928   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.693933   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.696914   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:45.193775   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.193795   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.193803   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.193807   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.197164   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.694421   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.694455   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.694461   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.697506   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.698052   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:46.193428   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.193455   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.193486   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.193492   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.197151   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:46.693979   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.693997   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.694013   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.694017   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.697611   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.193578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.193600   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.193607   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.193611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.197105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.693781   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.693802   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.693813   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.693817   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.696934   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:48.194335   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.194358   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.194365   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.194368   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.198434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:48.199180   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:48.693737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.693758   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.693768   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.693773   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.697344   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:49.193432   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.193451   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.193459   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.193463   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.196304   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:49.694364   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.694385   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.694396   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.694403   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.697486   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.193397   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.193418   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.193431   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.193435   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.197076   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.693831   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.693856   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.693867   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.693873   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.697369   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.698284   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:51.194258   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.194289   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.194294   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.197449   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:51.694317   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.694339   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.694350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.694356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.698049   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.194018   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.194043   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.194052   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.194061   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.197494   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.694202   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.694224   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.694232   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.694236   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.697227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:53.193702   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.193722   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.193729   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.193733   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.196923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:53.197555   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:53.694135   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.694158   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.694166   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.694172   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.697390   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:54.193409   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.193427   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.193439   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.193443   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.195968   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.693832   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.693853   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.693861   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.693866   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.696718   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.697386   28654 node_ready.go:49] node "ha-199780-m03" has status "Ready":"True"
	I1009 19:13:54.697405   28654 node_ready.go:38] duration metric: took 17.504141075s for node "ha-199780-m03" to be "Ready" ...
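
Note: the 17.5s loop above is minikube polling GET /api/v1/nodes/ha-199780-m03 roughly every 500ms and reading the Ready condition out of node.status until it turns True. The same check from a shell, hedged (the context name follows minikube's profile-named kubeconfig contexts):

	kubectl --context ha-199780 get node ha-199780-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	# or simply watch until the STATUS column flips to Ready
	kubectl --context ha-199780 get node ha-199780-m03 --watch
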
	I1009 19:13:54.697413   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:54.697463   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:13:54.697471   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.697479   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.697484   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.703461   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
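
Note: the list call above fetches every kube-system pod once; minikube then waits per component using the selectors named at 19:13:54.697 (k8s-app=kube-dns, component=etcd/kube-apiserver/kube-controller-manager/kube-scheduler, k8s-app=kube-proxy), re-fetching each pod's node as it goes. A hedged shell equivalent of that readiness sweep:

	# DNS and proxy pods, selected by k8s-app label
	kubectl -n kube-system get pods -l 'k8s-app in (kube-dns,kube-proxy)' -o wide
	# static control-plane pods, selected by component label
	kubectl -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' -o wide
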
	I1009 19:13:54.710054   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.710118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:13:54.710126   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.710133   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.710136   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.712863   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.713585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.713602   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.713609   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.713613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.715857   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.716501   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.716519   28654 pod_ready.go:82] duration metric: took 6.443501ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716529   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:13:54.716586   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.716593   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.716599   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.718834   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.719475   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.719490   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.719499   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.719505   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.721592   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.722022   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.722036   28654 pod_ready.go:82] duration metric: took 5.49901ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722045   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722092   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:13:54.722102   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.722111   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.722117   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.724132   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.724537   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.724549   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.724558   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.724564   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.726416   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:13:54.726760   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.726774   28654 pod_ready.go:82] duration metric: took 4.721439ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726783   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726829   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:13:54.726838   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.726847   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.726853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.728868   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.729481   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:54.729499   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.729510   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.729515   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.731574   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.732095   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.732112   28654 pod_ready.go:82] duration metric: took 5.322203ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.732123   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.894472   28654 request.go:632] Waited for 162.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894602   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894612   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.894619   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.894623   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.897741   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
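
Note: the "Waited ... due to client-side throttling" messages that begin here come from client-go's request.go on the minikube side, not from API Priority and Fairness on the server: the rest.Config logged at 19:13:37 leaves QPS and Burst at 0, so client-go falls back to its defaults (about 5 requests/s with a burst of 10), and the two-GETs-per-pod readiness pattern drains that burst quickly. A trivial, hedged way to gauge how often this run hit the limiter (the log file name here is hypothetical):

	# count and sample the client-side throttle waits in a saved copy of this log
	grep -c 'due to client-side throttling' ha-199780-start.log
	grep 'due to client-side throttling' ha-199780-start.log | head
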
	I1009 19:13:55.094188   28654 request.go:632] Waited for 195.683908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094240   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094246   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.094253   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.094258   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.097407   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.098074   28654 pod_ready.go:93] pod "etcd-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.098090   28654 pod_ready.go:82] duration metric: took 365.959261ms for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.098111   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.294211   28654 request.go:632] Waited for 196.026886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294264   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294270   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.294277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.294281   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.297814   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.494347   28654 request.go:632] Waited for 195.288987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494396   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494400   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.494409   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.494414   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.497640   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.498264   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.498282   28654 pod_ready.go:82] duration metric: took 400.159789ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.498295   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.694371   28654 request.go:632] Waited for 196.007868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694438   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.694452   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.694457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.697453   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:55.894821   28654 request.go:632] Waited for 196.365606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894877   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894894   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.894903   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.894908   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.898105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.898641   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.898656   28654 pod_ready.go:82] duration metric: took 400.354565ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.898665   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.094875   28654 request.go:632] Waited for 196.142376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094943   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094953   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.094962   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.094969   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.098488   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.294812   28654 request.go:632] Waited for 195.339632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294879   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294886   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.294897   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.294905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.298371   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.299243   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.299268   28654 pod_ready.go:82] duration metric: took 400.59742ms for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.299278   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.494432   28654 request.go:632] Waited for 195.083743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494487   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494493   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.494503   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.494508   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.498203   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.694515   28654 request.go:632] Waited for 195.651266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694574   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.694582   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.694589   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.697903   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.698503   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.698524   28654 pod_ready.go:82] duration metric: took 399.235411ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.698534   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.894604   28654 request.go:632] Waited for 196.010295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894690   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894699   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.894709   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.894725   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.897698   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:57.094771   28654 request.go:632] Waited for 196.347164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094830   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094837   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.094846   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.094853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.097915   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.098466   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.098483   28654 pod_ready.go:82] duration metric: took 399.942607ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.098496   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.294694   28654 request.go:632] Waited for 196.107304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294760   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.294778   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.294791   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.298281   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.493859   28654 request.go:632] Waited for 194.862003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493928   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493933   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.493941   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.493945   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.497771   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.498530   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.498546   28654 pod_ready.go:82] duration metric: took 400.036948ms for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.498556   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.694138   28654 request.go:632] Waited for 195.506846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694204   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.694211   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.694217   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.698240   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:57.894301   28654 request.go:632] Waited for 195.370676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894370   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894377   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.894391   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.894398   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.897846   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.898728   28654 pod_ready.go:93] pod "kube-proxy-cltcd" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.898745   28654 pod_ready.go:82] duration metric: took 400.184495ms for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.898756   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.094244   28654 request.go:632] Waited for 195.417272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094320   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094332   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.094339   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.094343   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.098070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.294156   28654 request.go:632] Waited for 195.371857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294219   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294226   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.294237   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.294245   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.297391   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.297856   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.297872   28654 pod_ready.go:82] duration metric: took 399.106499ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.297884   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.493870   28654 request.go:632] Waited for 195.913549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493927   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.493937   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.493944   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.497117   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.694489   28654 request.go:632] Waited for 196.566825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694545   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694552   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.694563   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.694568   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.697679   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.698297   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.698312   28654 pod_ready.go:82] duration metric: took 400.419475ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.698322   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.894499   28654 request.go:632] Waited for 196.088891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894592   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.894603   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.894613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.897964   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.094228   28654 request.go:632] Waited for 195.366071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094310   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094322   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.094333   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.094342   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.097557   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.098186   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.098207   28654 pod_ready.go:82] duration metric: took 399.878488ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.098219   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.294278   28654 request.go:632] Waited for 195.983419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294332   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.294345   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.294350   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.297821   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.493975   28654 request.go:632] Waited for 195.208037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494031   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494036   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.494044   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.494049   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.501563   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:13:59.502080   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.502097   28654 pod_ready.go:82] duration metric: took 403.868133ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.502106   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.694192   28654 request.go:632] Waited for 192.028751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694247   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694253   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.694260   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.694264   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.697180   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.894169   28654 request.go:632] Waited for 196.350026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894218   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894223   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.894230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.894235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.897240   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.897806   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.897823   28654 pod_ready.go:82] duration metric: took 395.71123ms for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.897835   28654 pod_ready.go:39] duration metric: took 5.200413633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:59.897849   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:13:59.897900   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:59.914617   28654 api_server.go:72] duration metric: took 23.08591673s to wait for apiserver process to appear ...
	I1009 19:13:59.914639   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:13:59.914655   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:13:59.918628   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:13:59.918679   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:13:59.918686   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.918696   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.918706   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.919571   28654 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 19:13:59.919687   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:13:59.919708   28654 api_server.go:131] duration metric: took 5.063855ms to wait for apiserver health ...
	I1009 19:13:59.919716   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:14:00.094827   28654 request.go:632] Waited for 175.023163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094896   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094904   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.094915   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.094925   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.100594   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.107658   28654 system_pods.go:59] 24 kube-system pods found
	I1009 19:14:00.107684   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.107689   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.107692   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.107695   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.107699   28654 system_pods.go:61] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.107702   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.107706   28654 system_pods.go:61] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.107711   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.107716   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.107721   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.107725   28654 system_pods.go:61] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.107733   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.107738   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.107747   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.107754   28654 system_pods.go:61] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.107758   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.107765   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.107770   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.107777   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.107783   28654 system_pods.go:61] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.107790   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.107795   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.107802   28654 system_pods.go:61] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.107808   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.107818   28654 system_pods.go:74] duration metric: took 188.095908ms to wait for pod list to return data ...
	I1009 19:14:00.107830   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:14:00.294248   28654 request.go:632] Waited for 186.335259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294301   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294308   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.294318   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.294323   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.298434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:14:00.298601   28654 default_sa.go:45] found service account: "default"
	I1009 19:14:00.298618   28654 default_sa.go:55] duration metric: took 190.779244ms for default service account to be created ...
	I1009 19:14:00.298632   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:14:00.493990   28654 request.go:632] Waited for 195.280768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494052   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494059   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.494069   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.494077   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.499571   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.506443   28654 system_pods.go:86] 24 kube-system pods found
	I1009 19:14:00.506469   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.506474   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.506478   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.506482   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.506486   28654 system_pods.go:89] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.506490   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.506495   28654 system_pods.go:89] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.506503   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.506511   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.506518   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.506527   28654 system_pods.go:89] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.506539   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.506548   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.506555   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.506558   28654 system_pods.go:89] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.506564   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.506569   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.506574   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.506580   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.506585   28654 system_pods.go:89] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.506590   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.506598   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.506602   28654 system_pods.go:89] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.506610   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.506619   28654 system_pods.go:126] duration metric: took 207.977758ms to wait for k8s-apps to be running ...
	I1009 19:14:00.506632   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:14:00.506681   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:14:00.521903   28654 system_svc.go:56] duration metric: took 15.266021ms WaitForService to wait for kubelet
	I1009 19:14:00.521926   28654 kubeadm.go:582] duration metric: took 23.693227633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:14:00.521941   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:14:00.694326   28654 request.go:632] Waited for 172.306887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694392   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694398   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.694405   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.694409   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.698331   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:14:00.699548   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699566   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699577   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699581   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699584   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699587   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699591   28654 node_conditions.go:105] duration metric: took 177.645761ms to run NodePressure ...
	I1009 19:14:00.699601   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:14:00.699621   28654 start.go:255] writing updated cluster config ...
	I1009 19:14:00.699890   28654 ssh_runner.go:195] Run: rm -f paused
	I1009 19:14:00.750344   28654 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 19:14:00.752481   28654 out.go:177] * Done! kubectl is now configured to use "ha-199780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.218077036Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501465218055932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=099aac18-2758-4881-b9f7-7a83c6723413 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.218663586Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e5819cd-c53b-4379-8894-6d60f8d90706 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.218720879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e5819cd-c53b-4379-8894-6d60f8d90706 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.218955144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e5819cd-c53b-4379-8894-6d60f8d90706 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.264608161Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a9706fe-0adc-485c-b2cf-edc00afc894b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.264810405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a9706fe-0adc-485c-b2cf-edc00afc894b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.266680863Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f56b79f-cd57-41e4-a44b-97b52f85f021 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.267110016Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501465267088764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f56b79f-cd57-41e4-a44b-97b52f85f021 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.267667727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f7068c7f-ff25-47f1-8396-e707d398feb0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.267718891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f7068c7f-ff25-47f1-8396-e707d398feb0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.268232194Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f7068c7f-ff25-47f1-8396-e707d398feb0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.307148565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6311726-7155-40d2-858b-86805998088b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.307224100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6311726-7155-40d2-858b-86805998088b name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.308134251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44fb9332-bda3-4156-9184-a961b916a903 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.308654166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501465308632508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44fb9332-bda3-4156-9184-a961b916a903 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.309133271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=49aa009a-c0fc-4c10-b5d3-e01830a477e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.309204151Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=49aa009a-c0fc-4c10-b5d3-e01830a477e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.309488367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=49aa009a-c0fc-4c10-b5d3-e01830a477e5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.352832235Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dcd01856-1fc1-430b-a637-a761eb67e875 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.352929683Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dcd01856-1fc1-430b-a637-a761eb67e875 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.353929888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02d3bfc4-9d94-4a37-9ff3-afe4a6c8bf6a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.354450704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501465354382761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02d3bfc4-9d94-4a37-9ff3-afe4a6c8bf6a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.354914855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbbdefcc-f71b-4ee1-b17d-e321c77b9d7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.354970557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbbdefcc-f71b-4ee1-b17d-e321c77b9d7a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:45 ha-199780 crio[667]: time="2024-10-09 19:17:45.355247024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbbdefcc-f71b-4ee1-b17d-e321c77b9d7a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ea2f43f1a79f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4ee23da4cac60       busybox-7dff88458-9j59h
	22a50af75d092       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   085e585069bd9       coredns-7c65d6cfc9-r8lg7
	35a77197ba833       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   31a68dbf07563       coredns-7c65d6cfc9-v5k75
	ec6c52f12ef1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   fe10d9898f15c       storage-provisioner
	aa6f941b511ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   574f1065ffc92       kindnet-2gjpk
	e72e7a03ebf12       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   893da030028ba       kube-proxy-n8ffq
	5e66ef287f9b9       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   f43a5a99f755d       kube-vip-ha-199780
	297d9ba8730bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c04b2a2ff60e       kube-apiserver-ha-199780
	88b0c31651177       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   7304e21bfd538       kube-controller-manager-ha-199780
	ce5525ec371c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a31ef18f5a475       etcd-ha-199780
	02b6fe12544b4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4e472f9c0008c       kube-scheduler-ha-199780
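
For reference, a listing in the format above can normally be reproduced on the node itself by querying CRI-O with crictl. The command below is a sketch, not part of the captured output; it assumes the minikube profile name used in this run (ha-199780) and the CRI-O socket path reported in the node annotations (unix:///var/run/crio/crio.sock).

  out/minikube-linux-amd64 -p ha-199780 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a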
	
	
	==> coredns [22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431] <==
	[INFO] 10.244.2.2:60800 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001355455s
	[INFO] 10.244.2.2:51592 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001524757s
	[INFO] 10.244.0.4:56643 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000117626s
	[INFO] 10.244.0.4:59083 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001918015s
	[INFO] 10.244.1.2:50050 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020734s
	[INFO] 10.244.1.2:42588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154546s
	[INFO] 10.244.2.2:53843 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710102s
	[INFO] 10.244.2.2:41845 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146416s
	[INFO] 10.244.2.2:36609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000234089s
	[INFO] 10.244.0.4:46267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770158s
	[INFO] 10.244.0.4:50439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087554s
	[INFO] 10.244.0.4:34970 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127814s
	[INFO] 10.244.0.4:56896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001173975s
	[INFO] 10.244.0.4:49966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151676s
	[INFO] 10.244.1.2:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014083s
	[INFO] 10.244.1.2:44506 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088434s
	[INFO] 10.244.1.2:49086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070298s
	[INFO] 10.244.2.2:50808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197102s
	[INFO] 10.244.0.4:46671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019106s
	[INFO] 10.244.0.4:55369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070793s
	[INFO] 10.244.1.2:55579 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00053279s
	[INFO] 10.244.1.2:48281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017096s
	[INFO] 10.244.2.2:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179419s
	[INFO] 10.244.2.2:37087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001697s
	[INFO] 10.244.0.4:45764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105979s
	
	
	==> coredns [35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72] <==
	[INFO] 10.244.1.2:49567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017247s
	[INFO] 10.244.1.2:46716 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012636722s
	[INFO] 10.244.1.2:55598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179363s
	[INFO] 10.244.1.2:47319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137976s
	[INFO] 10.244.2.2:41489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.2.2:55951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222614s
	[INFO] 10.244.2.2:48627 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015294s
	[INFO] 10.244.2.2:39644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012309s
	[INFO] 10.244.2.2:40477 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089525s
	[INFO] 10.244.0.4:43949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.4:36372 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136676s
	[INFO] 10.244.0.4:46637 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067852s
	[INFO] 10.244.1.2:51170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178464s
	[INFO] 10.244.2.2:34724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178092s
	[INFO] 10.244.2.2:51704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113596s
	[INFO] 10.244.2.2:58856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114468s
	[INFO] 10.244.0.4:46411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103548s
	[INFO] 10.244.0.4:56515 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097616s
	[INFO] 10.244.1.2:46439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144476s
	[INFO] 10.244.1.2:55946 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169556s
	[INFO] 10.244.2.2:59005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136307s
	[INFO] 10.244.2.2:36778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074325s
	[INFO] 10.244.0.4:35520 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216466s
	[INFO] 10.244.0.4:37146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092067s
	[INFO] 10.244.0.4:38648 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006473s
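
The CoreDNS entries above are query logs, roughly in the form client:port - query-id "type class name proto size do bufsize" followed by the response code, flags, response size, and latency. A hedged way to generate comparable lookups from inside the cluster is to exec into the busybox pod listed earlier; the kubeconfig context name is assumed to match the minikube profile.

  kubectl --context ha-199780 exec busybox-7dff88458-9j59h -- nslookup kubernetes.default.svc.cluster.local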
	
	
	==> describe nodes <==
	Name:               ha-199780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-199780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8b350a04d4e4876ae4d16443fff45f4
	  System UUID:                f8b350a0-4d4e-4876-ae4d-16443fff45f4
	  Boot ID:                    933ad8fe-c793-4abe-b675-8fc9d8bb0df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9j59h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-r8lg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 coredns-7c65d6cfc9-v5k75             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m18s
	  kube-system                 etcd-ha-199780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-2gjpk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m18s
	  kube-system                 kube-apiserver-ha-199780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-controller-manager-ha-199780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-n8ffq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-scheduler-ha-199780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-199780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m16s  kube-proxy       
	  Normal  Starting                 6m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m22s  kubelet          Node ha-199780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s  kubelet          Node ha-199780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s  kubelet          Node ha-199780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m19s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  NodeReady                6m     kubelet          Node ha-199780 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  RegisteredNode           4m3s   node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	
	
	Name:               ha-199780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:12:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:15:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-199780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d9c79bf2f124101a095ed4ba0ce88eb
	  System UUID:                8d9c79bf-2f12-4101-a095-ed4ba0ce88eb
	  Boot ID:                    5dd46771-2617-4b89-b6af-8b5fb9f8968b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6v84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-199780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-pwr8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-199780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-controller-manager-ha-199780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-zfsq8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-199780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-199780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m24s                  kube-proxy       
	  Normal  Starting                 5m28s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-199780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  NodeNotReady             113s                   node-controller  Node ha-199780-m02 status is now: NodeNotReady
	
	
	Name:               ha-199780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-199780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebc1909fc264048999cb603a9af6ce3
	  System UUID:                eebc1909-fc26-4048-999c-b603a9af6ce3
	  Boot ID:                    b15e1b77-82c5-4af5-a3d4-20b2860c5033
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8946j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-199780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m10s
	  kube-system                 kindnet-b8ff2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-ha-199780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-ha-199780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-cltcd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-ha-199780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-vip-ha-199780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node ha-199780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m3s                   node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	
	
	Name:               ha-199780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_14_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    ha-199780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e482090944bd998625225909c9e80
	  System UUID:                781e4820-9094-4bd9-9862-5225909c9e80
	  Boot ID:                    12a0f26b-3a10-4a3c-a52b-9cbc57a77f21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24ftv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-m4z2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)  kubelet          Node ha-199780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m3s                 node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  NodeReady                2m46s                kubelet          Node ha-199780-m04 status is now: NodeReady
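
In the node summaries above, ha-199780-m02 is the only node reporting Unknown conditions and the unreachable taints ("Kubelet stopped posting node status"), consistent with the secondary control-plane node having been stopped. Two follow-up commands to confirm the same state (the context name is assumed to match the minikube profile):

  kubectl --context ha-199780 get nodes -o wide
  kubectl --context ha-199780 describe node ha-199780-m02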
	
	
	==> dmesg <==
	[Oct 9 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040118] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479681] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 9 19:11] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.067225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062889] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.160511] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.147234] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.288221] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.950259] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.382176] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.347615] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.082493] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.436773] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.719462] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 9 19:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef] <==
	{"level":"warn","ts":"2024-10-09T19:17:45.623337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.649965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.661401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.664870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.680610Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.687572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.693744Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.698248Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.701798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.708055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.715162Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.722010Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.724334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.725497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.728743Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.736107Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.742601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.747135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.753226Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.759121Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.762925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.767560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.778577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.785935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:45.824302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:17:45 up 6 min,  0 users,  load average: 0.21, 0.33, 0.18
	Linux ha-199780 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff] <==
	I1009 19:17:15.107515       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:25.107513       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:25.107568       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:25.107889       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:25.107926       1 main.go:300] handling current node
	I1009 19:17:25.107945       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:25.107952       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:25.108091       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:25.108116       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:35.098534       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:35.098583       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:35.098861       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:35.098893       1 main.go:300] handling current node
	I1009 19:17:35.098905       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:35.098910       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:35.099056       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:35.099076       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106531       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:45.106579       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:45.106833       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:45.106857       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106999       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:45.107020       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:45.107136       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:45.107162       1 main.go:300] handling current node
	
	
	==> kube-apiserver [297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d] <==
	I1009 19:11:21.668889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:11:21.770460       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:11:21.781866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.114]
	I1009 19:11:21.782961       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 19:11:21.787948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:11:22.068030       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 19:11:22.927751       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 19:11:22.944470       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:11:23.089040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 19:11:27.267149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 19:11:27.777277       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1009 19:14:07.172312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48556: use of closed network connection
	E1009 19:14:07.353387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48566: use of closed network connection
	E1009 19:14:07.545234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48574: use of closed network connection
	E1009 19:14:07.734543       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48582: use of closed network connection
	E1009 19:14:07.929888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48590: use of closed network connection
	E1009 19:14:08.100628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48610: use of closed network connection
	E1009 19:14:08.280738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48618: use of closed network connection
	E1009 19:14:08.453709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48636: use of closed network connection
	E1009 19:14:08.625372       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48648: use of closed network connection
	E1009 19:14:08.913070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48688: use of closed network connection
	E1009 19:14:09.077842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48702: use of closed network connection
	E1009 19:14:09.252280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48730: use of closed network connection
	E1009 19:14:09.427983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1009 19:14:09.597172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48774: use of closed network connection
	
	
	==> kube-controller-manager [88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf] <==
	I1009 19:14:39.219907       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-199780-m04" podCIDRs=["10.244.3.0/24"]
	I1009 19:14:39.220731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.221061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.241490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.355995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.770947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:40.508613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-199780-m04"
	I1009 19:14:42.092487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.021323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.490581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:49.589213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:14:59.228331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:00.446970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:10.142919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:52.044073       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:15:52.044690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.073336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.197476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.479755ms"
	I1009 19:15:52.197580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.944µs"
	I1009 19:15:53.092490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:57.298894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	
	
	==> kube-proxy [e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 19:11:28.707293       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 19:11:28.725677       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E1009 19:11:28.725782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:11:28.757070       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 19:11:28.757115       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:11:28.757143       1 server_linux.go:169] "Using iptables Proxier"
	I1009 19:11:28.759907       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:11:28.760502       1 server.go:483] "Version info" version="v1.31.1"
	I1009 19:11:28.760531       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:11:28.763071       1 config.go:199] "Starting service config controller"
	I1009 19:11:28.763270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 19:11:28.763554       1 config.go:105] "Starting endpoint slice config controller"
	I1009 19:11:28.763583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 19:11:28.764395       1 config.go:328] "Starting node config controller"
	I1009 19:11:28.764485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 19:11:28.864003       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 19:11:28.864032       1 shared_informer.go:320] Caches are synced for service config
	I1009 19:11:28.864635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f] <==
	W1009 19:11:21.020523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.020653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.034179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.034272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.151254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:11:21.151392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.213273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:11:21.213327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.215782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:11:21.217186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.224009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:11:21.224287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.233925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:11:21.234510       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 19:11:21.254121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.254998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 19:11:24.360718       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 19:14:39.271772       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274796       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d0c6f382-7a34-4281-922e-ded9d878bec1(kube-system/kube-proxy-v6wc7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v6wc7"
	E1009 19:14:39.274892       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" pod="kube-system/kube-proxy-v6wc7"
	I1009 19:14:39.274974       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274639       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	E1009 19:14:39.277781       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67dc91f7-39c8-4a82-843c-629f28c633ce(kube-system/kindnet-24ftv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24ftv"
	E1009 19:14:39.277909       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" pod="kube-system/kindnet-24ftv"
	I1009 19:14:39.278018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	
	
	==> kubelet <==
	Oct 09 19:16:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:16:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169875    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169902    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171614    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171869    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174108    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174391    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177556    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177590    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179697    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179743    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181290    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181685    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.046503    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183478    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183519    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.185325    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.186043    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188281    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188327    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.64s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr: (3.975156517s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (1.456970798s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m03_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-199780 node start m02 -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:10:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:10:42.430511   28654 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:42.430648   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430657   28654 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:42.430662   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430823   28654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:42.431377   28654 out.go:352] Setting JSON to false
	I1009 19:10:42.432258   28654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1728497859,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:42.432357   28654 start.go:139] virtualization: kvm guest
	I1009 19:10:42.434444   28654 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:42.435720   28654 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:42.435744   28654 notify.go:220] Checking for updates...
	I1009 19:10:42.438470   28654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:42.439771   28654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:42.441201   28654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.442550   28654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:42.443839   28654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:42.445321   28654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:42.478513   28654 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 19:10:42.479828   28654 start.go:297] selected driver: kvm2
	I1009 19:10:42.479841   28654 start.go:901] validating driver "kvm2" against <nil>
	I1009 19:10:42.479851   28654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:42.480537   28654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.480609   28654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:10:42.494762   28654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:10:42.494798   28654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 19:10:42.495015   28654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:42.495042   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:10:42.495103   28654 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:10:42.495115   28654 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:10:42.495160   28654 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:42.495268   28654 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.497127   28654 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:10:42.498350   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:10:42.498375   28654 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:10:42.498383   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:10:42.498461   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:10:42.498474   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:10:42.498736   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:10:42.498755   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json: {Name:mkaa9f981fdc58b4cf67de89e14727a24139b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:42.498888   28654 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:10:42.498923   28654 start.go:364] duration metric: took 18.652µs to acquireMachinesLock for "ha-199780"
	I1009 19:10:42.498944   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:10:42.499008   28654 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 19:10:42.500613   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:10:42.500730   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:42.500770   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:42.514603   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I1009 19:10:42.515116   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:42.515617   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:10:42.515660   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:42.515950   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:42.516152   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:10:42.516283   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:10:42.516418   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:10:42.516447   28654 client.go:168] LocalClient.Create starting
	I1009 19:10:42.516482   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:10:42.516515   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516531   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516577   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:10:42.516599   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516612   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516640   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:10:42.516651   28654 main.go:141] libmachine: (ha-199780) Calling .PreCreateCheck
	I1009 19:10:42.516980   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:10:42.517335   28654 main.go:141] libmachine: Creating machine...
	I1009 19:10:42.517347   28654 main.go:141] libmachine: (ha-199780) Calling .Create
	I1009 19:10:42.517467   28654 main.go:141] libmachine: (ha-199780) Creating KVM machine...
	I1009 19:10:42.518611   28654 main.go:141] libmachine: (ha-199780) DBG | found existing default KVM network
	I1009 19:10:42.519307   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.519165   28677 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1009 19:10:42.519338   28654 main.go:141] libmachine: (ha-199780) DBG | created network xml: 
	I1009 19:10:42.519353   28654 main.go:141] libmachine: (ha-199780) DBG | <network>
	I1009 19:10:42.519365   28654 main.go:141] libmachine: (ha-199780) DBG |   <name>mk-ha-199780</name>
	I1009 19:10:42.519373   28654 main.go:141] libmachine: (ha-199780) DBG |   <dns enable='no'/>
	I1009 19:10:42.519380   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519389   28654 main.go:141] libmachine: (ha-199780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 19:10:42.519398   28654 main.go:141] libmachine: (ha-199780) DBG |     <dhcp>
	I1009 19:10:42.519408   28654 main.go:141] libmachine: (ha-199780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 19:10:42.519416   28654 main.go:141] libmachine: (ha-199780) DBG |     </dhcp>
	I1009 19:10:42.519425   28654 main.go:141] libmachine: (ha-199780) DBG |   </ip>
	I1009 19:10:42.519432   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519439   28654 main.go:141] libmachine: (ha-199780) DBG | </network>
	I1009 19:10:42.519448   28654 main.go:141] libmachine: (ha-199780) DBG | 
	I1009 19:10:42.523998   28654 main.go:141] libmachine: (ha-199780) DBG | trying to create private KVM network mk-ha-199780 192.168.39.0/24...
	I1009 19:10:42.584957   28654 main.go:141] libmachine: (ha-199780) DBG | private KVM network mk-ha-199780 192.168.39.0/24 created
	I1009 19:10:42.584984   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.584941   28677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.584995   28654 main.go:141] libmachine: (ha-199780) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:42.585010   28654 main.go:141] libmachine: (ha-199780) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:10:42.585155   28654 main.go:141] libmachine: (ha-199780) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:10:42.845983   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.845854   28677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa...
	I1009 19:10:43.100187   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100062   28677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk...
	I1009 19:10:43.100216   28654 main.go:141] libmachine: (ha-199780) DBG | Writing magic tar header
	I1009 19:10:43.100229   28654 main.go:141] libmachine: (ha-199780) DBG | Writing SSH key tar header
	I1009 19:10:43.100242   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100204   28677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:43.100332   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780
	I1009 19:10:43.100355   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 (perms=drwx------)
	I1009 19:10:43.100365   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:10:43.100376   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:10:43.100386   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:43.100399   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:10:43.100406   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:10:43.100424   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:10:43.100435   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home
	I1009 19:10:43.100443   28654 main.go:141] libmachine: (ha-199780) DBG | Skipping /home - not owner
	I1009 19:10:43.100455   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:10:43.100467   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:10:43.100476   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:10:43.100483   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:10:43.100487   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:43.101601   28654 main.go:141] libmachine: (ha-199780) define libvirt domain using xml: 
	I1009 19:10:43.101609   28654 main.go:141] libmachine: (ha-199780) <domain type='kvm'>
	I1009 19:10:43.101614   28654 main.go:141] libmachine: (ha-199780)   <name>ha-199780</name>
	I1009 19:10:43.101624   28654 main.go:141] libmachine: (ha-199780)   <memory unit='MiB'>2200</memory>
	I1009 19:10:43.101632   28654 main.go:141] libmachine: (ha-199780)   <vcpu>2</vcpu>
	I1009 19:10:43.101638   28654 main.go:141] libmachine: (ha-199780)   <features>
	I1009 19:10:43.101646   28654 main.go:141] libmachine: (ha-199780)     <acpi/>
	I1009 19:10:43.101656   28654 main.go:141] libmachine: (ha-199780)     <apic/>
	I1009 19:10:43.101664   28654 main.go:141] libmachine: (ha-199780)     <pae/>
	I1009 19:10:43.101673   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.101686   28654 main.go:141] libmachine: (ha-199780)   </features>
	I1009 19:10:43.101695   28654 main.go:141] libmachine: (ha-199780)   <cpu mode='host-passthrough'>
	I1009 19:10:43.101702   28654 main.go:141] libmachine: (ha-199780)   
	I1009 19:10:43.101711   28654 main.go:141] libmachine: (ha-199780)   </cpu>
	I1009 19:10:43.101752   28654 main.go:141] libmachine: (ha-199780)   <os>
	I1009 19:10:43.101769   28654 main.go:141] libmachine: (ha-199780)     <type>hvm</type>
	I1009 19:10:43.101776   28654 main.go:141] libmachine: (ha-199780)     <boot dev='cdrom'/>
	I1009 19:10:43.101783   28654 main.go:141] libmachine: (ha-199780)     <boot dev='hd'/>
	I1009 19:10:43.101819   28654 main.go:141] libmachine: (ha-199780)     <bootmenu enable='no'/>
	I1009 19:10:43.101840   28654 main.go:141] libmachine: (ha-199780)   </os>
	I1009 19:10:43.101848   28654 main.go:141] libmachine: (ha-199780)   <devices>
	I1009 19:10:43.101855   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='cdrom'>
	I1009 19:10:43.101864   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/boot2docker.iso'/>
	I1009 19:10:43.101869   28654 main.go:141] libmachine: (ha-199780)       <target dev='hdc' bus='scsi'/>
	I1009 19:10:43.101877   28654 main.go:141] libmachine: (ha-199780)       <readonly/>
	I1009 19:10:43.101881   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101887   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='disk'>
	I1009 19:10:43.101894   28654 main.go:141] libmachine: (ha-199780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:10:43.101901   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk'/>
	I1009 19:10:43.101908   28654 main.go:141] libmachine: (ha-199780)       <target dev='hda' bus='virtio'/>
	I1009 19:10:43.101913   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101919   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101933   28654 main.go:141] libmachine: (ha-199780)       <source network='mk-ha-199780'/>
	I1009 19:10:43.101946   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101959   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.101969   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101978   28654 main.go:141] libmachine: (ha-199780)       <source network='default'/>
	I1009 19:10:43.101987   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101995   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.102004   28654 main.go:141] libmachine: (ha-199780)     <serial type='pty'>
	I1009 19:10:43.102012   28654 main.go:141] libmachine: (ha-199780)       <target port='0'/>
	I1009 19:10:43.102025   28654 main.go:141] libmachine: (ha-199780)     </serial>
	I1009 19:10:43.102042   28654 main.go:141] libmachine: (ha-199780)     <console type='pty'>
	I1009 19:10:43.102058   28654 main.go:141] libmachine: (ha-199780)       <target type='serial' port='0'/>
	I1009 19:10:43.102072   28654 main.go:141] libmachine: (ha-199780)     </console>
	I1009 19:10:43.102081   28654 main.go:141] libmachine: (ha-199780)     <rng model='virtio'>
	I1009 19:10:43.102095   28654 main.go:141] libmachine: (ha-199780)       <backend model='random'>/dev/random</backend>
	I1009 19:10:43.102102   28654 main.go:141] libmachine: (ha-199780)     </rng>
	I1009 19:10:43.102106   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102114   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102124   28654 main.go:141] libmachine: (ha-199780)   </devices>
	I1009 19:10:43.102131   28654 main.go:141] libmachine: (ha-199780) </domain>
	I1009 19:10:43.102144   28654 main.go:141] libmachine: (ha-199780) 
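The network and domain XML above are handed to libvirt to define and boot the VM. As a rough, stand-alone sketch of that step only, assuming the libvirt.org/go/libvirt Go bindings and a domainXML string like the one logged (this is an illustration, not the kvm2 driver's actual code path):

	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	// defineAndStart connects to the system libvirt daemon (the KVMQemuURI from
	// the config above), defines the domain from the generated XML, and boots it.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()

		// Corresponds to the "Creating domain..." lines in the log.
		return dom.Create()
	}

	func main() {
		// Placeholder XML; a real caller would pass the full <domain> document.
		if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}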
	I1009 19:10:43.106174   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:62:13:83 in network default
	I1009 19:10:43.106715   28654 main.go:141] libmachine: (ha-199780) Ensuring networks are active...
	I1009 19:10:43.106743   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:43.107417   28654 main.go:141] libmachine: (ha-199780) Ensuring network default is active
	I1009 19:10:43.107748   28654 main.go:141] libmachine: (ha-199780) Ensuring network mk-ha-199780 is active
	I1009 19:10:43.108262   28654 main.go:141] libmachine: (ha-199780) Getting domain xml...
	I1009 19:10:43.109003   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:44.275323   28654 main.go:141] libmachine: (ha-199780) Waiting to get IP...
	I1009 19:10:44.276021   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.276397   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.276440   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.276393   28677 retry.go:31] will retry after 234.976528ms: waiting for machine to come up
	I1009 19:10:44.512805   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.513239   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.513266   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.513207   28677 retry.go:31] will retry after 293.441421ms: waiting for machine to come up
	I1009 19:10:44.808637   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.809099   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.809119   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.809062   28677 retry.go:31] will retry after 303.641198ms: waiting for machine to come up
	I1009 19:10:45.114382   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.114813   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.114842   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.114772   28677 retry.go:31] will retry after 536.014176ms: waiting for machine to come up
	I1009 19:10:45.652428   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.652792   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.652818   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.652745   28677 retry.go:31] will retry after 705.110787ms: waiting for machine to come up
	I1009 19:10:46.359497   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:46.360044   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:46.360101   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:46.360017   28677 retry.go:31] will retry after 647.020654ms: waiting for machine to come up
	I1009 19:10:47.008863   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:47.009323   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:47.009364   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:47.009282   28677 retry.go:31] will retry after 1.0294982s: waiting for machine to come up
	I1009 19:10:48.039832   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:48.040304   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:48.040326   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:48.040267   28677 retry.go:31] will retry after 1.106767931s: waiting for machine to come up
	I1009 19:10:49.148646   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:49.149054   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:49.149076   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:49.149026   28677 retry.go:31] will retry after 1.376949133s: waiting for machine to come up
	I1009 19:10:50.527437   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:50.527855   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:50.527877   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:50.527806   28677 retry.go:31] will retry after 1.480550438s: waiting for machine to come up
	I1009 19:10:52.009673   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:52.010195   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:52.010224   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:52.010161   28677 retry.go:31] will retry after 2.407652517s: waiting for machine to come up
	I1009 19:10:54.420236   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:54.420627   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:54.420661   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:54.420596   28677 retry.go:31] will retry after 3.410708317s: waiting for machine to come up
	I1009 19:10:57.833396   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:57.833828   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:57.833855   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:57.833781   28677 retry.go:31] will retry after 3.08007179s: waiting for machine to come up
	I1009 19:11:00.918052   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:00.918375   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:11:00.918394   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:11:00.918349   28677 retry.go:31] will retry after 3.66383863s: waiting for machine to come up
	I1009 19:11:04.584755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585113   28654 main.go:141] libmachine: (ha-199780) Found IP for machine: 192.168.39.114
	I1009 19:11:04.585143   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585150   28654 main.go:141] libmachine: (ha-199780) Reserving static IP address...
	I1009 19:11:04.585468   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find host DHCP lease matching {name: "ha-199780", mac: "52:54:00:5a:16:82", ip: "192.168.39.114"} in network mk-ha-199780
	I1009 19:11:04.653177   28654 main.go:141] libmachine: (ha-199780) DBG | Getting to WaitForSSH function...
	I1009 19:11:04.653210   28654 main.go:141] libmachine: (ha-199780) Reserved static IP address: 192.168.39.114
	I1009 19:11:04.653224   28654 main.go:141] libmachine: (ha-199780) Waiting for SSH to be available...
	I1009 19:11:04.655641   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.655950   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.655974   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.656128   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH client type: external
	I1009 19:11:04.656155   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa (-rw-------)
	I1009 19:11:04.656182   28654 main.go:141] libmachine: (ha-199780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:04.656192   28654 main.go:141] libmachine: (ha-199780) DBG | About to run SSH command:
	I1009 19:11:04.656207   28654 main.go:141] libmachine: (ha-199780) DBG | exit 0
	I1009 19:11:04.778875   28654 main.go:141] libmachine: (ha-199780) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:04.779170   28654 main.go:141] libmachine: (ha-199780) KVM machine creation complete!
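The "will retry after ..." lines above come from a simple poll loop: query the DHCP leases for the new MAC, and if no address has appeared yet, sleep a growing interval and try again until a deadline. A minimal stand-alone version of that pattern is sketched below; the lookup function and the growth factor are placeholders, not the driver's real lease query or backoff schedule.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls lookup until it reports an address or the timeout expires.
	// The delay grows each round, mirroring the increasing "will retry after"
	// values in the log.
	func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookup(); ok {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay = delay * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		ip, err := waitForIP(func() (string, bool) { return "192.168.39.114", true }, time.Minute)
		fmt.Println(ip, err)
	}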
	I1009 19:11:04.779478   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:04.780010   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780176   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780315   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:04.780331   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:04.781523   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:04.781541   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:04.781546   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:04.781551   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.783979   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784330   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.784354   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784520   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.784676   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784815   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784920   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.785023   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.785198   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.785208   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:04.886621   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:04.886642   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:04.886652   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.889117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889470   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.889489   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889658   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.889825   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.889979   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.890105   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.890280   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.890429   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.890439   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:04.991626   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:04.991752   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:04.991763   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:04.991772   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.991975   28654 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:11:04.991994   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.992147   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.994446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994806   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.994831   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994954   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.995140   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995287   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995424   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.995557   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.995745   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.995756   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:11:05.113349   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:11:05.113396   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.116625   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117021   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.117049   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117198   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.117349   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117468   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117570   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.117692   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.117857   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.117885   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:05.228123   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:05.228148   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:05.228172   28654 buildroot.go:174] setting up certificates
	I1009 19:11:05.228182   28654 provision.go:84] configureAuth start
	I1009 19:11:05.228189   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:05.228442   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.230797   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231092   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.231117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231241   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.233255   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233547   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.233569   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233652   28654 provision.go:143] copyHostCerts
	I1009 19:11:05.233688   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233736   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:05.233748   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233826   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:05.233942   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.233970   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:05.233976   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.234005   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:05.234063   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234084   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:05.234090   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234111   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:05.234159   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
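The line above shows provision.go signing a server certificate against the local CA with the listed SANs (127.0.0.1, the VM IP, the hostname, localhost, minikube). A compact sketch of producing that kind of SAN-bearing certificate with Go's standard library follows; it uses an ECDSA key purely for brevity, so it illustrates the shape of the certificate rather than minikube's exact implementation.

	package provision

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by caCert/caKey, valid for
	// the given DNS names and the VM's IP plus loopback, as in the log line above.
	func newServerCert(caCert *x509.Certificate, caKey *ecdsa.PrivateKey, vmIP string, names ...string) ([]byte, *ecdsa.PrivateKey, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     names,
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP(vmIP)},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}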
	I1009 19:11:05.299525   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:05.299577   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:05.299597   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.301859   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302122   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.302159   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302298   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.302456   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.302593   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.302710   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.385328   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:05.385392   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:05.408377   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:05.408446   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:11:05.431231   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:05.431308   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:05.454941   28654 provision.go:87] duration metric: took 226.750506ms to configureAuth
	I1009 19:11:05.454965   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:05.455145   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:05.455206   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.457741   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458006   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.458042   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458216   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.458397   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458525   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458644   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.458788   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.458960   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.458976   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:05.676474   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:05.676512   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:05.676522   28654 main.go:141] libmachine: (ha-199780) Calling .GetURL
	I1009 19:11:05.677728   28654 main.go:141] libmachine: (ha-199780) DBG | Using libvirt version 6000000
	I1009 19:11:05.679755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680041   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.680069   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680196   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:05.680210   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:05.680217   28654 client.go:171] duration metric: took 23.163762708s to LocalClient.Create
	I1009 19:11:05.680235   28654 start.go:167] duration metric: took 23.163818343s to libmachine.API.Create "ha-199780"
	I1009 19:11:05.680244   28654 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:11:05.680255   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:05.680269   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.680459   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:05.680481   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.682388   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682658   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.682683   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682747   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.682909   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.683039   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.683197   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.767177   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:05.771701   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:05.771721   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:05.771790   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:05.771869   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:05.771881   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:05.771984   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:05.783287   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:05.808917   28654 start.go:296] duration metric: took 128.662808ms for postStartSetup
	I1009 19:11:05.808956   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:05.809504   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.812016   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812350   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.812373   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812566   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:05.812738   28654 start.go:128] duration metric: took 23.313722048s to createHost
	I1009 19:11:05.812762   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.814746   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.815078   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815176   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.815323   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815479   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815598   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.815737   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.815932   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.815953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:05.919951   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501065.894358321
	
	I1009 19:11:05.919974   28654 fix.go:216] guest clock: 1728501065.894358321
	I1009 19:11:05.919982   28654 fix.go:229] Guest: 2024-10-09 19:11:05.894358321 +0000 UTC Remote: 2024-10-09 19:11:05.812750418 +0000 UTC m=+23.417944098 (delta=81.607903ms)
	I1009 19:11:05.920005   28654 fix.go:200] guest clock delta is within tolerance: 81.607903ms
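The clock check above runs `date +%s.%N` in the guest and compares the result against the host's wall clock; when the skew stays inside a tolerance, no time resync is forced. A small sketch of that comparison (the tolerance value and helper name here are illustrative):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// withinTolerance parses the guest's `date +%s.%N` output and reports the
	// absolute skew against the host clock. Parsing through float64 loses
	// sub-microsecond precision, which is fine for a millisecond-level check.
	func withinTolerance(guestOutput string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		d, ok, _ := withinTolerance("1728501065.894358321", time.Now(), 2*time.Second)
		fmt.Println(d, ok)
	}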
	I1009 19:11:05.920012   28654 start.go:83] releasing machines lock for "ha-199780", held for 23.421078352s
	I1009 19:11:05.920035   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.920263   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.922615   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.922966   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.922995   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.923150   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923568   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923734   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923824   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:05.923862   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.924006   28654 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:05.924044   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.926446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926648   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926765   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.926802   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926912   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.927038   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927086   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.927223   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927272   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927339   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.927433   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927750   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927897   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:06.024499   28654 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:06.030414   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:06.185061   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:06.191423   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:06.191490   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:06.206786   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:06.206805   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:06.206857   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:06.222401   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:06.235373   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:06.235433   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:06.247949   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:06.260686   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:06.376406   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:06.514646   28654 docker.go:233] disabling docker service ...
	I1009 19:11:06.514703   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:06.529298   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:06.542407   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:06.674904   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:06.805457   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:06.819076   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:06.839480   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:06.839538   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.851838   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:06.851893   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.864160   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.876368   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.889066   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:06.901093   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.912169   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.929058   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.939929   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:06.949542   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:06.949583   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:06.962939   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:06.972697   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:07.093662   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:07.192295   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:07.192352   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:07.197105   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:07.197162   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:07.200935   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:07.247609   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:07.247689   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.275380   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.304930   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:07.306083   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:07.308768   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309094   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:07.309121   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309303   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:07.313459   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:07.326691   28654 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:07.326798   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:07.326859   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:07.358942   28654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 19:11:07.359000   28654 ssh_runner.go:195] Run: which lz4
	I1009 19:11:07.363007   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1009 19:11:07.363119   28654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:11:07.367226   28654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:11:07.367262   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 19:11:08.682998   28654 crio.go:462] duration metric: took 1.319910565s to copy over tarball
	I1009 19:11:08.683082   28654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 19:11:10.661640   28654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978525541s)
	I1009 19:11:10.661674   28654 crio.go:469] duration metric: took 1.978647131s to extract the tarball
	I1009 19:11:10.661683   28654 ssh_runner.go:146] rm: /preloaded.tar.lz4
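No Kubernetes images were found in CRI-O's store, so the cached preload tarball was copied over and unpacked into /var. The same check-and-extract sequence run by hand on the node, with the tarball already copied to /preloaded.tar.lz4 as in the log (illustrative only, using the exact commands shown above):

    sudo crictl images --output json | grep -q kube-apiserver || echo 'preload missing'
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo crictl images        # the v1.31.1 control-plane images should now be listed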
	I1009 19:11:10.698452   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:10.744870   28654 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:10.744890   28654 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:11:10.744897   28654 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:11:10.744976   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:10.745041   28654 ssh_runner.go:195] Run: crio config
	I1009 19:11:10.794773   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:10.794792   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:10.794807   28654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:10.794828   28654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:10.794978   28654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:11:10.795005   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:10.795055   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:10.811512   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:10.811631   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
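This manifest is copied into /etc/kubernetes/manifests a few lines below, so kubelet runs kube-vip as a static pod that claims the HA virtual IP 192.168.39.254 on eth0 and fronts the API server on port 8443. Once the control plane is up, the VIP can be spot-checked from the node (an illustrative check, not part of the test run):

    ls -l /etc/kubernetes/manifests/kube-vip.yaml
    ip addr show eth0 | grep 192.168.39.254
    curl -ks https://192.168.39.254:8443/healthz; echo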
	I1009 19:11:10.811693   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:10.821887   28654 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:10.821946   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:11:10.831583   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:11:10.848385   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:10.865617   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:11:10.882082   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1009 19:11:10.898198   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:10.902054   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:10.914494   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:11.043972   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
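The kubelet.service unit and the 10-kubeadm.conf drop-in shown earlier (ExecStart with --hostname-override=ha-199780 and --node-ip=192.168.39.114) are now in place and the service has been started. A quick way to confirm systemd merged the drop-in (a sketch):

    sudo systemctl cat kubelet | grep '^ExecStart='
    sudo systemctl is-active kubelet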
	I1009 19:11:11.060509   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:11:11.060533   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:11.060553   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.060728   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:11.060785   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:11.060798   28654 certs.go:256] generating profile certs ...
	I1009 19:11:11.060867   28654 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:11.060891   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt with IP's: []
	I1009 19:11:11.257901   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt ...
	I1009 19:11:11.257931   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt: {Name:mke6971132fee40da37bc72041e92dde05b5c360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258111   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key ...
	I1009 19:11:11.258127   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key: {Name:mk2c48ceaf748f5efc5f062df1cf8bf8d38b626a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258227   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621
	I1009 19:11:11.258246   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I1009 19:11:11.502202   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 ...
	I1009 19:11:11.502241   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621: {Name:mk85bc5cf43d418e43d8be4b6611eb785caa9f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502445   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 ...
	I1009 19:11:11.502463   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621: {Name:mk1d94ea93b96fe750cd9f95170ab488ca016856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502573   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:11.502721   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:11:11.502815   28654 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:11.502839   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt with IP's: []
	I1009 19:11:11.612443   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt ...
	I1009 19:11:11.612470   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt: {Name:mk212b018e6441944e189239707af3950678c689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612646   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key ...
	I1009 19:11:11.612656   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key: {Name:mkb7f3d492b787f9b9b56d2b48939b9971f793ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612724   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:11.612740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:11.612751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:11.612763   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:11.612774   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:11.612786   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:11.612798   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:11.612810   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:11.612864   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:11.612897   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:11.612903   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:11.612926   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:11.612951   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:11.612971   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:11.613006   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:11.613033   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.613046   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.613058   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:11.613596   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:11.638855   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:11.662787   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:11.686693   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:11.710429   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:11.734032   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:11.757651   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:11.781611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:11.805128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:11.831515   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:11.878516   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:11.903576   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
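The apiserver certificate generated at 19:11:11.258246 was signed for the IPs listed there (including the node IP 192.168.39.114 and the HA VIP 192.168.39.254) and has just been copied to /var/lib/minikube/certs. Verifying the SANs on the copied cert is a one-liner (illustrative):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A2 'Subject Alternative Name'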
	I1009 19:11:11.920589   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:11.926400   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:11.937651   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942167   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942223   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.947902   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:11.959013   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:11.970169   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974738   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974799   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.980430   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:11.991569   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:12.002421   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006666   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006711   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.012305   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:11:12.023435   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:12.027428   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:12.027474   28654 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:12.027535   28654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:12.027572   28654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:12.068414   28654 cri.go:89] found id: ""
	I1009 19:11:12.068473   28654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:12.078653   28654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:11:12.088659   28654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:12.098391   28654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:11:12.098408   28654 kubeadm.go:157] found existing configuration files:
	
	I1009 19:11:12.098445   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:11:12.107757   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:11:12.107807   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:11:12.117369   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:11:12.126789   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:11:12.126847   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:12.136637   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.146308   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:11:12.146364   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.156469   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:11:12.165834   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:11:12.165886   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:12.175515   28654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 19:11:12.280177   28654 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 19:11:12.280255   28654 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 19:11:12.386423   28654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:11:12.386621   28654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:11:12.386752   28654 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:11:12.404964   28654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:11:12.482162   28654 out.go:235]   - Generating certificates and keys ...
	I1009 19:11:12.482262   28654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 19:11:12.482346   28654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 19:11:12.648552   28654 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:11:12.833455   28654 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:11:13.055850   28654 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:11:13.322371   28654 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 19:11:13.484433   28654 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 19:11:13.484631   28654 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:13.583799   28654 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 19:11:13.584031   28654 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:14.090538   28654 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:11:14.260812   28654 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:11:14.391262   28654 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 19:11:14.391369   28654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:11:14.744340   28654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:11:14.834478   28654 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:11:14.925339   28654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:11:15.080024   28654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:11:15.271189   28654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:11:15.271810   28654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:11:15.277194   28654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:11:15.369554   28654 out.go:235]   - Booting up control plane ...
	I1009 19:11:15.369723   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:11:15.369842   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:11:15.369937   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:11:15.370057   28654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:11:15.370148   28654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:11:15.370183   28654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 19:11:15.445224   28654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:11:15.445341   28654 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:11:16.448580   28654 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005128821s
	I1009 19:11:16.448662   28654 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 19:11:22.061566   28654 kubeadm.go:310] [api-check] The API server is healthy after 5.61687232s
	I1009 19:11:22.078904   28654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:11:22.108560   28654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:11:22.646139   28654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:11:22.646344   28654 kubeadm.go:310] [mark-control-plane] Marking the node ha-199780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:11:22.657702   28654 kubeadm.go:310] [bootstrap-token] Using token: n3skeb.bws3ifw22cumajmm
	I1009 19:11:22.659119   28654 out.go:235]   - Configuring RBAC rules ...
	I1009 19:11:22.659267   28654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:11:22.664574   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:11:22.677942   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:11:22.681624   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:11:22.685155   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:11:22.689541   28654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:11:22.705080   28654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:11:22.957052   28654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 19:11:23.469842   28654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 19:11:23.470871   28654 kubeadm.go:310] 
	I1009 19:11:23.470925   28654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 19:11:23.470933   28654 kubeadm.go:310] 
	I1009 19:11:23.471051   28654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 19:11:23.471083   28654 kubeadm.go:310] 
	I1009 19:11:23.471125   28654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 19:11:23.471223   28654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:11:23.471271   28654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:11:23.471296   28654 kubeadm.go:310] 
	I1009 19:11:23.471380   28654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 19:11:23.471393   28654 kubeadm.go:310] 
	I1009 19:11:23.471455   28654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:11:23.471464   28654 kubeadm.go:310] 
	I1009 19:11:23.471537   28654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 19:11:23.471641   28654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:11:23.471738   28654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:11:23.471753   28654 kubeadm.go:310] 
	I1009 19:11:23.471870   28654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:11:23.471974   28654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 19:11:23.471984   28654 kubeadm.go:310] 
	I1009 19:11:23.472086   28654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472234   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 19:11:23.472263   28654 kubeadm.go:310] 	--control-plane 
	I1009 19:11:23.472276   28654 kubeadm.go:310] 
	I1009 19:11:23.472382   28654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:11:23.472392   28654 kubeadm.go:310] 
	I1009 19:11:23.472488   28654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472616   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 19:11:23.473525   28654 kubeadm.go:310] W1009 19:11:12.257145     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473837   28654 kubeadm.go:310] W1009 19:11:12.259703     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473994   28654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
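kubeadm init succeeded but flagged two things: the generated config still uses the deprecated kubeadm.k8s.io/v1beta3 API, and the kubelet unit is not enabled. Both follow-ups are the commands the warnings themselves suggest (the migrated-config output path below is just an example):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /tmp/kubeadm-migrated.yaml
    sudo systemctl enable kubelet.service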
	I1009 19:11:23.474033   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:23.474046   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:23.475963   28654 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 19:11:23.477363   28654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:11:23.483529   28654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 19:11:23.483553   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:11:23.504303   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
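With one node detected, minikube selects kindnet and applies the manifest it copied to /var/tmp/minikube/cni.yaml using the node-local kubectl and kubeconfig. A follow-up rollout check might look like this (the kindnet DaemonSet name is an assumption, not taken from this log):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide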
	I1009 19:11:23.863157   28654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:11:23.863274   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:23.863284   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780 minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=true
	I1009 19:11:23.884152   28654 ops.go:34] apiserver oom_adj: -16
	I1009 19:11:24.005714   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:24.506374   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.006091   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.506438   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.006141   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.506040   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.006400   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.505831   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.598386   28654 kubeadm.go:1113] duration metric: took 3.735177044s to wait for elevateKubeSystemPrivileges
	I1009 19:11:27.598425   28654 kubeadm.go:394] duration metric: took 15.5709527s to StartCluster
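The burst of "kubectl get sa default" calls above is the elevateKubeSystemPrivileges step polling until the default service account exists before the cluster-admin binding for kube-system takes effect. An equivalent stand-alone wait (a sketch):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done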
	I1009 19:11:27.598446   28654 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.598527   28654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.599166   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.599347   28654 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:27.599374   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:11:27.599357   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:11:27.599375   28654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:11:27.599458   28654 addons.go:69] Setting storage-provisioner=true in profile "ha-199780"
	I1009 19:11:27.599469   28654 addons.go:69] Setting default-storageclass=true in profile "ha-199780"
	I1009 19:11:27.599477   28654 addons.go:234] Setting addon storage-provisioner=true in "ha-199780"
	I1009 19:11:27.599485   28654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-199780"
	I1009 19:11:27.599503   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.599506   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:27.599886   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599927   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599929   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.599968   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.614342   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I1009 19:11:27.614587   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I1009 19:11:27.614820   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615004   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615360   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615381   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615494   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615521   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615770   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615869   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615936   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.616437   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.616482   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.618027   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.618409   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:11:27.618933   28654 cert_rotation.go:140] Starting client certificate rotation controller
	I1009 19:11:27.619199   28654 addons.go:234] Setting addon default-storageclass=true in "ha-199780"
	I1009 19:11:27.619240   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.619589   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.619644   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.631880   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I1009 19:11:27.632439   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.632953   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.632968   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.633306   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.633511   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.633650   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I1009 19:11:27.634127   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.634757   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.634777   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.635148   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.635306   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.635705   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.635747   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.637278   28654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:11:27.638972   28654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.638992   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:11:27.639008   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.642192   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642642   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.642674   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642796   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.642968   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.643174   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.643344   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.651531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I1009 19:11:27.652010   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.652633   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.652663   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.652996   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.653186   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.654702   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.654903   28654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:27.654916   28654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:11:27.654931   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.657462   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657809   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.657834   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657997   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.658162   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.658275   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.658409   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.708249   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:11:27.824778   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.831460   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:28.120955   28654 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 19:11:28.573087   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573114   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573134   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573150   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573505   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573520   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573544   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573545   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573557   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573510   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573628   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573649   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573658   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573565   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573900   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573917   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573930   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573931   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573940   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573984   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.574002   28654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:11:28.574017   28654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:11:28.574123   28654 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1009 19:11:28.574129   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.574140   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.574147   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.586337   28654 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1009 19:11:28.587207   28654 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1009 19:11:28.587225   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.587233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.587241   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.587251   28654 round_trippers.go:473]     Content-Type: application/json
	I1009 19:11:28.594277   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:11:28.594441   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.594457   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.594703   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.594721   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.596581   28654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:11:28.597699   28654 addons.go:510] duration metric: took 998.327173ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 19:11:28.597726   28654 start.go:246] waiting for cluster config update ...
	I1009 19:11:28.597735   28654 start.go:255] writing updated cluster config ...
	I1009 19:11:28.599169   28654 out.go:201] 
	I1009 19:11:28.600456   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:28.600538   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.601965   28654 out.go:177] * Starting "ha-199780-m02" control-plane node in "ha-199780" cluster
	I1009 19:11:28.602974   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:28.602993   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:11:28.603093   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:28.603107   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:11:28.603182   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.603350   28654 start.go:360] acquireMachinesLock for ha-199780-m02: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:11:28.603394   28654 start.go:364] duration metric: took 25.364µs to acquireMachinesLock for "ha-199780-m02"
	I1009 19:11:28.603415   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:28.603505   28654 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1009 19:11:28.604883   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:11:28.604963   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:28.604996   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:28.620174   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1009 19:11:28.620709   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:28.621235   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:28.621259   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:28.621551   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:28.621737   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:28.621880   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:28.622077   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:11:28.622107   28654 client.go:168] LocalClient.Create starting
	I1009 19:11:28.622146   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:11:28.622193   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622213   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622278   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:11:28.622306   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622322   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622345   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:11:28.622356   28654 main.go:141] libmachine: (ha-199780-m02) Calling .PreCreateCheck
	I1009 19:11:28.622534   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:28.622992   28654 main.go:141] libmachine: Creating machine...
	I1009 19:11:28.623009   28654 main.go:141] libmachine: (ha-199780-m02) Calling .Create
	I1009 19:11:28.623202   28654 main.go:141] libmachine: (ha-199780-m02) Creating KVM machine...
	I1009 19:11:28.624414   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing default KVM network
	I1009 19:11:28.624553   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing private KVM network mk-ha-199780
	I1009 19:11:28.624697   28654 main.go:141] libmachine: (ha-199780-m02) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:28.624717   28654 main.go:141] libmachine: (ha-199780-m02) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:11:28.627180   28654 main.go:141] libmachine: (ha-199780-m02) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:11:28.627222   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.624673   29017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:28.859004   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.858864   29017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa...
	I1009 19:11:29.192250   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192144   29017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk...
	I1009 19:11:29.192281   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing magic tar header
	I1009 19:11:29.192291   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing SSH key tar header
	I1009 19:11:29.192299   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192250   29017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:29.192353   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02
	I1009 19:11:29.192372   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:11:29.192385   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 (perms=drwx------)
	I1009 19:11:29.192398   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:29.192410   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:11:29.192419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:11:29.192426   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:11:29.192433   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home
	I1009 19:11:29.192451   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Skipping /home - not owner
	I1009 19:11:29.192471   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:11:29.192484   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:11:29.192493   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:11:29.192501   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:11:29.192508   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:11:29.192515   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:29.193313   28654 main.go:141] libmachine: (ha-199780-m02) define libvirt domain using xml: 
	I1009 19:11:29.193342   28654 main.go:141] libmachine: (ha-199780-m02) <domain type='kvm'>
	I1009 19:11:29.193353   28654 main.go:141] libmachine: (ha-199780-m02)   <name>ha-199780-m02</name>
	I1009 19:11:29.193360   28654 main.go:141] libmachine: (ha-199780-m02)   <memory unit='MiB'>2200</memory>
	I1009 19:11:29.193368   28654 main.go:141] libmachine: (ha-199780-m02)   <vcpu>2</vcpu>
	I1009 19:11:29.193381   28654 main.go:141] libmachine: (ha-199780-m02)   <features>
	I1009 19:11:29.193404   28654 main.go:141] libmachine: (ha-199780-m02)     <acpi/>
	I1009 19:11:29.193418   28654 main.go:141] libmachine: (ha-199780-m02)     <apic/>
	I1009 19:11:29.193448   28654 main.go:141] libmachine: (ha-199780-m02)     <pae/>
	I1009 19:11:29.193470   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193481   28654 main.go:141] libmachine: (ha-199780-m02)   </features>
	I1009 19:11:29.193502   28654 main.go:141] libmachine: (ha-199780-m02)   <cpu mode='host-passthrough'>
	I1009 19:11:29.193521   28654 main.go:141] libmachine: (ha-199780-m02)   
	I1009 19:11:29.193531   28654 main.go:141] libmachine: (ha-199780-m02)   </cpu>
	I1009 19:11:29.193548   28654 main.go:141] libmachine: (ha-199780-m02)   <os>
	I1009 19:11:29.193569   28654 main.go:141] libmachine: (ha-199780-m02)     <type>hvm</type>
	I1009 19:11:29.193584   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='cdrom'/>
	I1009 19:11:29.193597   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='hd'/>
	I1009 19:11:29.193605   28654 main.go:141] libmachine: (ha-199780-m02)     <bootmenu enable='no'/>
	I1009 19:11:29.193614   28654 main.go:141] libmachine: (ha-199780-m02)   </os>
	I1009 19:11:29.193622   28654 main.go:141] libmachine: (ha-199780-m02)   <devices>
	I1009 19:11:29.193631   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='cdrom'>
	I1009 19:11:29.193644   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/boot2docker.iso'/>
	I1009 19:11:29.193658   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hdc' bus='scsi'/>
	I1009 19:11:29.193669   28654 main.go:141] libmachine: (ha-199780-m02)       <readonly/>
	I1009 19:11:29.193678   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193692   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='disk'>
	I1009 19:11:29.193703   28654 main.go:141] libmachine: (ha-199780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:11:29.193717   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk'/>
	I1009 19:11:29.193731   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hda' bus='virtio'/>
	I1009 19:11:29.193743   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193752   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193764   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='mk-ha-199780'/>
	I1009 19:11:29.193774   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193784   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193794   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193805   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='default'/>
	I1009 19:11:29.193820   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193833   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193841   28654 main.go:141] libmachine: (ha-199780-m02)     <serial type='pty'>
	I1009 19:11:29.193855   28654 main.go:141] libmachine: (ha-199780-m02)       <target port='0'/>
	I1009 19:11:29.193865   28654 main.go:141] libmachine: (ha-199780-m02)     </serial>
	I1009 19:11:29.193871   28654 main.go:141] libmachine: (ha-199780-m02)     <console type='pty'>
	I1009 19:11:29.193881   28654 main.go:141] libmachine: (ha-199780-m02)       <target type='serial' port='0'/>
	I1009 19:11:29.193890   28654 main.go:141] libmachine: (ha-199780-m02)     </console>
	I1009 19:11:29.193901   28654 main.go:141] libmachine: (ha-199780-m02)     <rng model='virtio'>
	I1009 19:11:29.193911   28654 main.go:141] libmachine: (ha-199780-m02)       <backend model='random'>/dev/random</backend>
	I1009 19:11:29.193933   28654 main.go:141] libmachine: (ha-199780-m02)     </rng>
	I1009 19:11:29.193946   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193962   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193978   28654 main.go:141] libmachine: (ha-199780-m02)   </devices>
	I1009 19:11:29.193990   28654 main.go:141] libmachine: (ha-199780-m02) </domain>
	I1009 19:11:29.193999   28654 main.go:141] libmachine: (ha-199780-m02) 
	I1009 19:11:29.200233   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:9f:20:14 in network default
	I1009 19:11:29.200751   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring networks are active...
	I1009 19:11:29.200778   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:29.201355   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network default is active
	I1009 19:11:29.201602   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network mk-ha-199780 is active
	I1009 19:11:29.201876   28654 main.go:141] libmachine: (ha-199780-m02) Getting domain xml...
	I1009 19:11:29.202487   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:30.395985   28654 main.go:141] libmachine: (ha-199780-m02) Waiting to get IP...
	I1009 19:11:30.396850   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.397221   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.397245   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.397192   29017 retry.go:31] will retry after 306.623748ms: waiting for machine to come up
	I1009 19:11:30.705681   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.706111   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.706142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.706073   29017 retry.go:31] will retry after 272.886306ms: waiting for machine to come up
	I1009 19:11:30.980636   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.981119   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.981146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.981081   29017 retry.go:31] will retry after 373.250902ms: waiting for machine to come up
	I1009 19:11:31.355561   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.355953   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.355981   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.355905   29017 retry.go:31] will retry after 402.386513ms: waiting for machine to come up
	I1009 19:11:31.759650   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.760178   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.760204   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.760143   29017 retry.go:31] will retry after 700.718844ms: waiting for machine to come up
	I1009 19:11:32.462533   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:32.462970   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:32.462999   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:32.462916   29017 retry.go:31] will retry after 892.701908ms: waiting for machine to come up
	I1009 19:11:33.357278   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:33.357677   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:33.357700   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:33.357645   29017 retry.go:31] will retry after 892.900741ms: waiting for machine to come up
	I1009 19:11:34.252184   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:34.252581   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:34.252605   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:34.252542   29017 retry.go:31] will retry after 919.729577ms: waiting for machine to come up
	I1009 19:11:35.174060   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:35.174445   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:35.174475   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:35.174422   29017 retry.go:31] will retry after 1.688669614s: waiting for machine to come up
	I1009 19:11:36.865075   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:36.865384   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:36.865412   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:36.865340   29017 retry.go:31] will retry after 1.768384485s: waiting for machine to come up
	I1009 19:11:38.635106   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:38.635545   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:38.635574   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:38.635487   29017 retry.go:31] will retry after 2.193559284s: waiting for machine to come up
	I1009 19:11:40.831238   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:40.831740   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:40.831780   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:40.831709   29017 retry.go:31] will retry after 3.434402997s: waiting for machine to come up
	I1009 19:11:44.267146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:44.267644   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:44.267671   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:44.267602   29017 retry.go:31] will retry after 4.164642466s: waiting for machine to come up
	I1009 19:11:48.436657   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:48.436991   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:48.437015   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:48.436952   29017 retry.go:31] will retry after 3.860630111s: waiting for machine to come up
	I1009 19:11:52.302118   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302487   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has current primary IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302554   28654 main.go:141] libmachine: (ha-199780-m02) Found IP for machine: 192.168.39.83
	I1009 19:11:52.302579   28654 main.go:141] libmachine: (ha-199780-m02) Reserving static IP address...
	I1009 19:11:52.302886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find host DHCP lease matching {name: "ha-199780-m02", mac: "52:54:00:49:9d:cf", ip: "192.168.39.83"} in network mk-ha-199780
	I1009 19:11:52.372076   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Getting to WaitForSSH function...
	I1009 19:11:52.372102   28654 main.go:141] libmachine: (ha-199780-m02) Reserved static IP address: 192.168.39.83
	I1009 19:11:52.372115   28654 main.go:141] libmachine: (ha-199780-m02) Waiting for SSH to be available...
	I1009 19:11:52.374841   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.375450   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375560   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH client type: external
	I1009 19:11:52.375580   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa (-rw-------)
	I1009 19:11:52.375612   28654 main.go:141] libmachine: (ha-199780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:52.375635   28654 main.go:141] libmachine: (ha-199780-m02) DBG | About to run SSH command:
	I1009 19:11:52.375646   28654 main.go:141] libmachine: (ha-199780-m02) DBG | exit 0
	I1009 19:11:52.498886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:52.499168   28654 main.go:141] libmachine: (ha-199780-m02) KVM machine creation complete!
	I1009 19:11:52.499479   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:52.500069   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500241   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500393   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:52.500411   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetState
	I1009 19:11:52.501707   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:52.501728   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:52.501749   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:52.501756   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.503758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.504165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504286   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.504437   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504575   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.504794   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.504979   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.504989   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:52.602177   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.602204   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:52.602213   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.604728   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605107   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.605141   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605291   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.605469   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605606   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605724   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.605872   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.606034   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.606045   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:52.703707   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:52.703764   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:52.703771   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:52.703777   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704032   28654 buildroot.go:166] provisioning hostname "ha-199780-m02"
	I1009 19:11:52.704060   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704231   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.706798   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707185   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.707208   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707350   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.707510   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707650   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707773   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.707888   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.708063   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.708075   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m02 && echo "ha-199780-m02" | sudo tee /etc/hostname
	I1009 19:11:52.823258   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m02
	
	I1009 19:11:52.823287   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.825577   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.825861   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.825888   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.826053   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.826228   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826361   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826462   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.826604   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.826970   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.827005   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:52.936284   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.936322   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:52.936338   28654 buildroot.go:174] setting up certificates
	I1009 19:11:52.936349   28654 provision.go:84] configureAuth start
	I1009 19:11:52.936358   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.936621   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:52.939014   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939357   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.939378   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939565   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.941751   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942083   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.942102   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942262   28654 provision.go:143] copyHostCerts
	I1009 19:11:52.942292   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942326   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:52.942335   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942400   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:52.942490   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942507   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:52.942513   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942543   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:52.942586   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942603   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:52.942608   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942630   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:52.942675   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m02 san=[127.0.0.1 192.168.39.83 ha-199780-m02 localhost minikube]
	I1009 19:11:53.040172   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:53.040224   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:53.040246   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.042771   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043144   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.043165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043339   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.043536   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.043695   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.043830   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.125536   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:53.125611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:53.152398   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:53.152462   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:11:53.176418   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:53.176476   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:53.199215   28654 provision.go:87] duration metric: took 262.855174ms to configureAuth
	I1009 19:11:53.199238   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:53.199408   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:53.199489   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.202051   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202440   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.202470   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202579   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.202742   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.202905   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.203044   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.203213   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.203367   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.203381   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:53.429894   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:53.429922   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:53.429933   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetURL
	I1009 19:11:53.431192   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using libvirt version 6000000
	I1009 19:11:53.433633   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.433917   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.433942   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.434095   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:53.434111   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:53.434119   28654 client.go:171] duration metric: took 24.812002035s to LocalClient.Create
	I1009 19:11:53.434141   28654 start.go:167] duration metric: took 24.812066243s to libmachine.API.Create "ha-199780"
	I1009 19:11:53.434153   28654 start.go:293] postStartSetup for "ha-199780-m02" (driver="kvm2")
	I1009 19:11:53.434164   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:53.434178   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.434386   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:53.434414   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.436444   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436741   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.436766   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436885   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.437048   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.437204   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.437329   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.517247   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:53.521546   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:53.521570   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:53.521628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:53.521696   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:53.521706   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:53.521794   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:53.531170   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:53.555463   28654 start.go:296] duration metric: took 121.295956ms for postStartSetup
	I1009 19:11:53.555509   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:53.556089   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.558610   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.558965   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.558990   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.559241   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:53.559417   28654 start.go:128] duration metric: took 24.955894473s to createHost
	I1009 19:11:53.559436   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.561758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562120   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.562145   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562297   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.562466   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562603   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.562800   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.562944   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.562953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:53.659740   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501113.618380735
	
	I1009 19:11:53.659761   28654 fix.go:216] guest clock: 1728501113.618380735
	I1009 19:11:53.659770   28654 fix.go:229] Guest: 2024-10-09 19:11:53.618380735 +0000 UTC Remote: 2024-10-09 19:11:53.559427397 +0000 UTC m=+71.164621077 (delta=58.953338ms)
	I1009 19:11:53.659789   28654 fix.go:200] guest clock delta is within tolerance: 58.953338ms
	I1009 19:11:53.659795   28654 start.go:83] releasing machines lock for "ha-199780-m02", held for 25.056389443s
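
	[editor's note] The fix.go lines above read the guest clock over SSH with "date +%s.%N" and accept the host when the delta against the local wall clock stays inside a tolerance (here 58.953338ms). A minimal sketch of that comparison in Go, assuming a 2s tolerance and the seconds.nanoseconds output format shown in the log; the names below are illustrative, not minikube's own API:

	package main

	import (
	    "fmt"
	    "math"
	    "strconv"
	    "strings"
	    "time"
	)

	// parseGuestClock turns "1728501113.618380735" (output of `date +%s.%N`) into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
	    parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	    sec, err := strconv.ParseInt(parts[0], 10, 64)
	    if err != nil {
	        return time.Time{}, err
	    }
	    nsec := int64(0)
	    if len(parts) == 2 {
	        if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
	            return time.Time{}, err
	        }
	    }
	    return time.Unix(sec, nsec), nil
	}

	func main() {
	    guest, err := parseGuestClock("1728501113.618380735") // value taken from the log above
	    if err != nil {
	        panic(err)
	    }
	    delta := guest.Sub(time.Now())
	    tolerance := 2 * time.Second // assumed tolerance, not necessarily minikube's exact value
	    if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
	        fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	    } else {
	        fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	    }
	}
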
	I1009 19:11:53.659818   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.660047   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.662723   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.663038   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.663084   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.665166   28654 out.go:177] * Found network options:
	I1009 19:11:53.666287   28654 out.go:177]   - NO_PROXY=192.168.39.114
	W1009 19:11:53.667466   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.667505   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.667962   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668130   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668248   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:53.668296   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	W1009 19:11:53.668300   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.668381   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:53.668416   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.670930   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671210   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671283   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671304   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671447   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671527   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671552   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671587   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671735   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671750   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.671893   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671912   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.672014   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.672148   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.899517   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:53.905678   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:53.905741   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:53.922185   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
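
	[editor's note] The find/mv step above renames pre-existing bridge and podman CNI definitions in /etc/cni/net.d with a ".mk_disabled" suffix so they do not conflict with the CNI that minikube manages (cni.go:262 then reports what was disabled). A rough Go equivalent of that rename pass, for illustration only; the directory and suffix mirror the log:

	package main

	import (
	    "fmt"
	    "os"
	    "path/filepath"
	    "strings"
	)

	func main() {
	    dir := "/etc/cni/net.d"
	    entries, err := os.ReadDir(dir)
	    if err != nil {
	        panic(err)
	    }
	    for _, e := range entries {
	        name := e.Name()
	        if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
	            continue
	        }
	        // Only bridge/podman configs are considered conflicting, as in the find expression above.
	        if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
	            continue
	        }
	        src := filepath.Join(dir, name)
	        if err := os.Rename(src, src+".mk_disabled"); err != nil {
	            panic(err)
	        }
	        fmt.Printf("disabled %s\n", src)
	    }
	}
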
	I1009 19:11:53.922206   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:53.922263   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:53.937820   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:53.953029   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:53.953091   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:53.967078   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:53.981025   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:54.113745   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:54.255530   28654 docker.go:233] disabling docker service ...
	I1009 19:11:54.255587   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:54.270170   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:54.283110   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:54.427830   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:54.542861   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:54.559019   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:54.577775   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:54.577834   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.588489   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:54.588563   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.598988   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.609116   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.619104   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:54.629621   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.640002   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.656572   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
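
	[editor's note] The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10 pause image, the "cgroupfs" cgroup manager with conmon placed in the pod cgroup, and a default sysctl that opens unprivileged low ports. A compressed sketch of the same edits done locally with Go regexp; minikube actually drives these through sed over SSH as logged, and the guard that creates a missing default_sysctls block is omitted here:

	package main

	import (
	    "os"
	    "regexp"
	)

	func main() {
	    const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	    data, err := os.ReadFile(conf)
	    if err != nil {
	        panic(err)
	    }
	    s := string(data)

	    // pause_image = "registry.k8s.io/pause:3.10"
	    s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
	        ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	    // cgroup_manager = "cgroupfs", with conmon running in the pod cgroup.
	    s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAllString(s, "")
	    s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
	        ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	    // Let workloads bind low ports without extra capabilities.
	    s = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
	        ReplaceAllString(s, "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",")

	    if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
	        panic(err)
	    }
	}
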
	I1009 19:11:54.666994   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:54.677176   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:54.677232   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:54.689637   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:54.698765   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:54.819897   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:54.911734   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:54.911789   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:54.916451   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:54.916494   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:54.920158   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:54.955402   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:54.955480   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:54.982980   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:55.012563   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:55.013723   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:11:55.014768   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:55.017153   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017506   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:55.017538   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017692   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:55.021943   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
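
	[editor's note] The bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing line for host.minikube.internal, append the fresh mapping, and copy the result back into place. The same pattern sketched directly in Go (paths and the entry value mirror the log; error handling is kept minimal):

	package main

	import (
	    "os"
	    "strings"
	)

	// upsertHostsEntry removes any line that already maps hostname and appends "ip\thostname",
	// mirroring the grep -v / echo pipeline shown in the log.
	func upsertHostsEntry(path, ip, hostname string) error {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return err
	    }
	    var kept []string
	    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	        if strings.HasSuffix(line, "\t"+hostname) {
	            continue // drop the stale mapping
	        }
	        kept = append(kept, line)
	    }
	    kept = append(kept, ip+"\t"+hostname)
	    return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
	    if err := upsertHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
	        panic(err)
	    }
	}
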
	I1009 19:11:55.034196   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:11:55.034432   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:55.034865   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.034912   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.049583   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I1009 19:11:55.050018   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.050467   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.050491   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.050776   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.050944   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:55.052331   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:55.052611   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.052643   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.066531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1009 19:11:55.066862   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.067348   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.067376   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.067659   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.067826   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:55.067945   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.83
	I1009 19:11:55.067956   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:55.067973   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.068103   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:55.068159   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:55.068171   28654 certs.go:256] generating profile certs ...
	I1009 19:11:55.068256   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:55.068286   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0
	I1009 19:11:55.068307   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.254]
	I1009 19:11:55.274614   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 ...
	I1009 19:11:55.274645   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0: {Name:mkea8c047205788ccead22201bc77c7190717cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274816   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 ...
	I1009 19:11:55.274832   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0: {Name:mk98b6fcd80ec856f6c63ddb6177c8a08e2dbf7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274920   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:55.275082   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
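
	[editor's note] The profile cert generated above is an apiserver serving certificate whose IP SANs cover the service VIP, localhost, both control-plane node IPs, and the kube-vip HA VIP 192.168.39.254, so clients can reach the API through any of them. A self-contained sketch of issuing such a cert from an existing CA with crypto/x509; the CA file names, RSA/PKCS#1 key format, and validity period are assumptions, only the SAN list is copied from the log:

	package main

	import (
	    "crypto/rand"
	    "crypto/rsa"
	    "crypto/x509"
	    "crypto/x509/pkix"
	    "encoding/pem"
	    "math/big"
	    "net"
	    "os"
	    "time"
	)

	func mustRead(path string) []byte {
	    b, err := os.ReadFile(path)
	    if err != nil {
	        panic(err)
	    }
	    return b
	}

	func main() {
	    // Shared minikube CA; paths are shortened versions of the ones in the log.
	    caBlock, _ := pem.Decode(mustRead("ca.crt"))
	    keyBlock, _ := pem.Decode(mustRead("ca.key"))
	    caCert, err := x509.ParseCertificate(caBlock.Bytes)
	    if err != nil {
	        panic(err)
	    }
	    caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	    if err != nil {
	        panic(err)
	    }

	    // IP SANs copied from the log line above.
	    var sans []net.IP
	    for _, ip := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
	        "192.168.39.114", "192.168.39.83", "192.168.39.254"} {
	        sans = append(sans, net.ParseIP(ip))
	    }

	    key, err := rsa.GenerateKey(rand.Reader, 2048)
	    if err != nil {
	        panic(err)
	    }
	    tmpl := &x509.Certificate{
	        SerialNumber: big.NewInt(time.Now().UnixNano()),
	        Subject:      pkix.Name{CommonName: "minikube"},
	        NotBefore:    time.Now(),
	        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        IPAddresses:  sans,
	    }
	    der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	    if err != nil {
	        panic(err)
	    }
	    pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}
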
	I1009 19:11:55.275255   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:55.275273   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:55.275291   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:55.275308   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:55.275327   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:55.275347   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:55.275366   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:55.275383   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:55.275401   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:55.275466   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:55.275511   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:55.275524   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:55.275558   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:55.275590   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:55.275622   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:55.275679   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:55.275720   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.275740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.275758   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.275797   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:55.278862   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279369   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:55.279395   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279612   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:55.279780   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:55.279952   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:55.280049   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:55.351381   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:11:55.355961   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:11:55.367055   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:11:55.371613   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:11:55.382154   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:11:55.386133   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:11:55.395984   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:11:55.399714   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:11:55.409621   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:11:55.413853   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:11:55.423766   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:11:55.427525   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:11:55.437575   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:55.462624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:55.485719   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:55.508128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:55.530803   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:11:55.555486   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:55.580139   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:55.603207   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:55.626373   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:55.649676   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:55.673656   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:55.696721   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:11:55.712647   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:11:55.728611   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:11:55.744619   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:11:55.760726   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:11:55.776763   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:11:55.792315   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:11:55.807929   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:55.813442   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:55.823376   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827581   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.833072   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:55.842843   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:55.852649   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856766   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856802   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.862146   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:55.872016   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:55.881805   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885859   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885905   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.891246   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
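
	[editor's note] The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and then link it as /etc/ssl/certs/<subject-hash>.0, which is the hashed name OpenSSL's default lookup uses ("openssl x509 -hash -noout" prints that hash, e.g. b5213941 for minikubeCA.pem in this run). The same pattern sketched in Go by shelling out to openssl; paths are the ones from the log:

	package main

	import (
	    "fmt"
	    "os"
	    "os/exec"
	    "strings"
	)

	// linkCert makes certPath discoverable via OpenSSL's hashed-name lookup, mirroring
	// `openssl x509 -hash -noout -in ...` followed by `ln -fs ... /etc/ssl/certs/<hash>.0`.
	func linkCert(certPath string) error {
	    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	    if err != nil {
	        return err
	    }
	    link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	    _ = os.Remove(link) // -f semantics: replace an existing link
	    return os.Symlink(certPath, link)
	}

	func main() {
	    for _, c := range []string{
	        "/usr/share/ca-certificates/minikubeCA.pem",
	        "/usr/share/ca-certificates/16607.pem",
	        "/usr/share/ca-certificates/166072.pem",
	    } {
	        if err := linkCert(c); err != nil {
	            panic(err)
	        }
	    }
	}
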
	I1009 19:11:55.901096   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:55.904965   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:55.905009   28654 kubeadm.go:934] updating node {m02 192.168.39.83 8443 v1.31.1 crio true true} ...
	I1009 19:11:55.905077   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:55.905098   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:55.905121   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:55.919709   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:55.919759   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:11:55.919801   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.929228   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:11:55.929276   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.938319   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:11:55.938340   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938391   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938402   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1009 19:11:55.938404   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1009 19:11:55.942635   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:11:55.942660   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:11:57.241263   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:57.255221   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.255304   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.259158   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:11:57.259186   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:11:57.547794   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.547883   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.562384   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:11:57.562426   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
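
	[editor's note] Each kube binary above is fetched from dl.k8s.io with a "?checksum=file:...sha256" query, i.e. the download is verified against the published SHA-256 before it is cached locally and scp'd to the node. A minimal stand-alone version of that verification (the kubectl URL matches the log; the checksum-query handling inside minikube's download package is replaced here by an explicit comparison):

	package main

	import (
	    "crypto/sha256"
	    "encoding/hex"
	    "fmt"
	    "io"
	    "net/http"
	    "os"
	    "strings"
	)

	func fetch(url string) ([]byte, error) {
	    resp, err := http.Get(url)
	    if err != nil {
	        return nil, err
	    }
	    defer resp.Body.Close()
	    if resp.StatusCode != http.StatusOK {
	        return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	    }
	    return io.ReadAll(resp.Body)
	}

	func main() {
	    const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	    bin, err := fetch(base)
	    if err != nil {
	        panic(err)
	    }
	    sum, err := fetch(base + ".sha256")
	    if err != nil {
	        panic(err)
	    }
	    want := strings.Fields(string(sum))[0]
	    digest := sha256.Sum256(bin)
	    got := hex.EncodeToString(digest[:])
	    if got != want {
	        panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	    }
	    if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
	        panic(err)
	    }
	    fmt.Println("kubectl verified and written")
	}
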
	I1009 19:11:57.842477   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:11:57.852027   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:11:57.867591   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:57.883108   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:11:57.898843   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:57.902642   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:57.914959   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:58.028127   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:58.044965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:58.045423   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:58.045473   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:58.059986   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I1009 19:11:58.060458   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:58.060917   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:58.060934   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:58.061238   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:58.061410   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:58.061538   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:58.061653   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:11:58.061673   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:58.064589   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.064969   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:58.064994   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.065152   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:58.065308   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:58.065538   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:58.065661   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:58.210321   28654 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:58.210383   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443"
	I1009 19:12:19.134246   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443": (20.923839028s)
	I1009 19:12:19.134290   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:12:19.605010   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m02 minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:12:19.748442   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:12:19.868185   28654 start.go:319] duration metric: took 21.806636434s to joinCluster
	I1009 19:12:19.868265   28654 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:19.868592   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:19.870842   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:12:19.872112   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:12:20.132051   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:12:20.184872   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:12:20.185127   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:12:20.185184   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:12:20.185366   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m02" to be "Ready" ...
	I1009 19:12:20.185447   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.185457   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.185464   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.185468   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.196121   28654 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1009 19:12:20.685641   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.685666   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.685677   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.685683   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.700948   28654 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1009 19:12:21.186360   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.186379   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.186386   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.186390   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.190077   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:21.686495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.686523   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.686535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.686542   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.689757   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.185915   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.185938   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.185949   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.185955   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.189220   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.189830   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
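
	[editor's note] node_ready.go above polls GET /api/v1/nodes/ha-199780-m02 roughly every half second, for up to 6m, until the node's Ready condition turns True; the repeated 200 responses followed by has status "Ready":"False" are that loop. A trimmed-down version of the same wait using plain net/http and the profile's client certificate (file locations are shortened versions of the paths in the log; only the Node fields needed for the check are decoded):

	package main

	import (
	    "crypto/tls"
	    "crypto/x509"
	    "encoding/json"
	    "fmt"
	    "net/http"
	    "os"
	    "time"
	)

	// nodeStatus mirrors just the fields of a v1 Node that the readiness check needs.
	type nodeStatus struct {
	    Status struct {
	        Conditions []struct {
	            Type   string `json:"type"`
	            Status string `json:"status"`
	        } `json:"conditions"`
	    } `json:"status"`
	}

	func main() {
	    cert, err := tls.LoadX509KeyPair("profiles/ha-199780/client.crt", "profiles/ha-199780/client.key")
	    if err != nil {
	        panic(err)
	    }
	    caPEM, err := os.ReadFile("ca.crt")
	    if err != nil {
	        panic(err)
	    }
	    pool := x509.NewCertPool()
	    pool.AppendCertsFromPEM(caPEM)
	    client := &http.Client{Transport: &http.Transport{
	        TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	    }}

	    const url = "https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02"
	    deadline := time.Now().Add(6 * time.Minute) // same budget as the log's "waiting up to 6m0s"
	    for time.Now().Before(deadline) {
	        resp, err := client.Get(url)
	        if err == nil {
	            var n nodeStatus
	            if json.NewDecoder(resp.Body).Decode(&n) == nil {
	                for _, c := range n.Status.Conditions {
	                    if c.Type == "Ready" && c.Status == "True" {
	                        resp.Body.Close()
	                        fmt.Println("node is Ready")
	                        return
	                    }
	                }
	            }
	            resp.Body.Close()
	        }
	        time.Sleep(500 * time.Millisecond)
	    }
	    panic("node did not become Ready within 6m")
	}
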
	I1009 19:12:22.685885   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.685909   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.685925   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.685930   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.692565   28654 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 19:12:23.186131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.186153   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.186163   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.186170   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.190703   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:23.685823   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.685851   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.685864   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.685874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.689295   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:24.186259   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.186290   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.186302   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.190419   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:24.190953   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:24.686386   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.686405   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.686412   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.686418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.689349   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:25.186405   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.186431   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.186443   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.186448   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.189677   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:25.685894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.685917   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.685930   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.685938   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.688721   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:26.185700   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.185718   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.185725   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.185729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.189091   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:26.686200   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.686219   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.686227   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.686233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.691177   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:26.691800   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:27.186166   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.186200   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.186216   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.186227   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.208799   28654 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1009 19:12:27.686569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.686596   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.686606   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.686611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.690120   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.186542   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.186562   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.186570   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.186574   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.189659   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.685814   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.685834   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.685842   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.685846   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.689015   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.185658   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.185692   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.185703   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.185708   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.188963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.189656   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:29.686079   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.686104   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.686115   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.686119   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.689437   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.186344   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.186367   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.186378   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.186384   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.189946   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.685870   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.685896   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.685904   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.685909   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.689100   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.186316   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.186342   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.186351   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.186356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.189992   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.190453   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:31.685857   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.685878   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.685886   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.685890   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.689411   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:32.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.186439   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.186450   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.186457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.189297   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:32.686105   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.686126   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.686134   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.686138   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.689698   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.185993   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.186015   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.186024   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.186028   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.189373   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.685932   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.685955   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.685963   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.685968   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.689670   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.690285   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:34.185640   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.185662   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.185670   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.185674   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.188694   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:34.686203   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.686223   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.686231   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.690146   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.185607   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.185628   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.185636   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.185640   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.188854   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.685726   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.685746   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.685759   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.685764   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.689172   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.186278   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.186301   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.186312   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.189767   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.190519   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:36.685809   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.685841   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.685849   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.685853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.688923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.185894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.185920   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.185933   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.185940   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.189465   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.686197   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.686222   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.686230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.689394   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.185922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.185948   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.185956   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.185961   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.189255   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.685706   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.685729   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.685742   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.685751   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.689204   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.689971   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:39.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.186433   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.186447   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.186452   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.189522   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.190154   28654 node_ready.go:49] node "ha-199780-m02" has status "Ready":"True"
	I1009 19:12:39.190172   28654 node_ready.go:38] duration metric: took 19.004790985s for node "ha-199780-m02" to be "Ready" ...
	I1009 19:12:39.190183   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:12:39.190256   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:39.190268   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.190277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.190292   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.194625   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:39.201057   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.201129   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:12:39.201137   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.201144   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.201149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.203552   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.204277   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.204291   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.204298   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.204303   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.206434   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.207017   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.207033   28654 pod_ready.go:82] duration metric: took 5.954504ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207041   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:12:39.207128   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.207139   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.207148   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.209367   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.210180   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.210198   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.210204   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.210207   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.212254   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.212911   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.212929   28654 pod_ready.go:82] duration metric: took 5.881939ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212939   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212996   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:12:39.213004   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.213010   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.213014   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.215519   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.216198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.216212   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.216222   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.216228   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.218680   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.219274   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.219293   28654 pod_ready.go:82] duration metric: took 6.345815ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219306   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219361   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:12:39.219370   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.219379   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.219388   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.222905   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.223852   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.223867   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.223874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.223880   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.226122   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.226546   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.226559   28654 pod_ready.go:82] duration metric: took 7.244216ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.226571   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.386954   28654 request.go:632] Waited for 160.312334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387019   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387028   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.387041   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.387059   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.390052   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.587135   28654 request.go:632] Waited for 196.31885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587196   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587203   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.587211   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.587219   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.590448   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.591164   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.591183   28654 pod_ready.go:82] duration metric: took 364.606313ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.591192   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.787247   28654 request.go:632] Waited for 195.987261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787335   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.787346   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.787354   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.790620   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.986772   28654 request.go:632] Waited for 195.363358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986825   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986830   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.986837   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.986840   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.990003   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.990664   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.990682   28654 pod_ready.go:82] duration metric: took 399.483816ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.990691   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.186433   28654 request.go:632] Waited for 195.681011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186513   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186524   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.186535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.186544   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.189683   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.386818   28654 request.go:632] Waited for 196.355604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386887   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386893   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.386900   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.386905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.391133   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:40.391614   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.391638   28654 pod_ready.go:82] duration metric: took 400.93972ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.391651   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.586680   28654 request.go:632] Waited for 194.949325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586742   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.586750   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.586755   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.590444   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.786422   28654 request.go:632] Waited for 195.280915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786501   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.786509   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.786513   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.790326   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.791006   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.791029   28654 pod_ready.go:82] duration metric: took 399.365639ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.791046   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.987070   28654 request.go:632] Waited for 195.933748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987136   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.987143   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.987147   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.990605   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.186624   28654 request.go:632] Waited for 195.268606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186692   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186704   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.186711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.186715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.189956   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.190470   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.190489   28654 pod_ready.go:82] duration metric: took 399.435329ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.190501   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.386649   28654 request.go:632] Waited for 196.07336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386706   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.386713   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.386716   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.390032   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.587033   28654 request.go:632] Waited for 196.334104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587126   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587138   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.587149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.587167   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.590021   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.590641   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.590663   28654 pod_ready.go:82] duration metric: took 400.153892ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.590678   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.786648   28654 request.go:632] Waited for 195.890444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786708   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.786719   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.786729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.789369   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.987345   28654 request.go:632] Waited for 197.361828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987411   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987416   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.987424   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.987427   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.990745   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.991278   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.991294   28654 pod_ready.go:82] duration metric: took 400.607782ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.991303   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.187413   28654 request.go:632] Waited for 196.036626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187472   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187478   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.187488   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.187495   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.190480   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.386422   28654 request.go:632] Waited for 195.271897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386476   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386482   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.386489   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.386493   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.389175   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.389733   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:42.389754   28654 pod_ready.go:82] duration metric: took 398.44435ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.389768   28654 pod_ready.go:39] duration metric: took 3.199572136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:12:42.389785   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:12:42.389849   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:42.407811   28654 api_server.go:72] duration metric: took 22.539512335s to wait for apiserver process to appear ...
	I1009 19:12:42.407834   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:12:42.407855   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:12:42.414877   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:12:42.414962   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:12:42.414974   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.414984   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.414991   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.416098   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:12:42.416185   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:12:42.416202   28654 api_server.go:131] duration metric: took 8.360977ms to wait for apiserver health ...
	I1009 19:12:42.416212   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:12:42.587017   28654 request.go:632] Waited for 170.742751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587127   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587142   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.587151   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.587157   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.592323   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:12:42.596935   28654 system_pods.go:59] 17 kube-system pods found
	I1009 19:12:42.596960   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.596966   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.596971   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.596974   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.596977   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.596980   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.596983   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.596991   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.596995   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.597000   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.597004   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.597007   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.597011   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.597015   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.597018   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.597023   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.597026   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.597031   28654 system_pods.go:74] duration metric: took 180.813466ms to wait for pod list to return data ...
	I1009 19:12:42.597039   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:12:42.787461   28654 request.go:632] Waited for 190.355387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787510   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787515   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.787523   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.787526   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.791707   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.791908   28654 default_sa.go:45] found service account: "default"
	I1009 19:12:42.791921   28654 default_sa.go:55] duration metric: took 194.876803ms for default service account to be created ...
	I1009 19:12:42.791929   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:12:42.987347   28654 request.go:632] Waited for 195.347718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987402   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987407   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.987415   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.987418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.992125   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.996490   28654 system_pods.go:86] 17 kube-system pods found
	I1009 19:12:42.996520   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.996536   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.996541   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.996545   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.996552   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.996564   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.996567   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.996571   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.996576   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.996580   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.996583   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.996587   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.996591   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.996594   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.996598   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.996603   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.996605   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.996612   28654 system_pods.go:126] duration metric: took 204.678176ms to wait for k8s-apps to be running ...
	I1009 19:12:42.996621   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:12:42.996661   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:12:43.012943   28654 system_svc.go:56] duration metric: took 16.312977ms WaitForService to wait for kubelet
	I1009 19:12:43.012964   28654 kubeadm.go:582] duration metric: took 23.14466791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:12:43.012979   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:12:43.186683   28654 request.go:632] Waited for 173.643549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186731   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186737   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:43.186744   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:43.186750   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:43.190743   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:43.191568   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191597   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191608   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191612   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191618   28654 node_conditions.go:105] duration metric: took 178.633815ms to run NodePressure ...
	I1009 19:12:43.191635   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:12:43.191663   28654 start.go:255] writing updated cluster config ...
	I1009 19:12:43.193878   28654 out.go:201] 
	I1009 19:12:43.195204   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:43.195296   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.196947   28654 out.go:177] * Starting "ha-199780-m03" control-plane node in "ha-199780" cluster
	I1009 19:12:43.198242   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:12:43.198257   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:12:43.198354   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:12:43.198368   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:12:43.198453   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.198644   28654 start.go:360] acquireMachinesLock for ha-199780-m03: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:12:43.198693   28654 start.go:364] duration metric: took 30.243µs to acquireMachinesLock for "ha-199780-m03"
	I1009 19:12:43.198715   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:43.198839   28654 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1009 19:12:43.200292   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:12:43.200365   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:12:43.200395   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:12:43.215501   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I1009 19:12:43.215883   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:12:43.216432   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:12:43.216461   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:12:43.216780   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:12:43.216973   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:12:43.217128   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:12:43.217269   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:12:43.217296   28654 client.go:168] LocalClient.Create starting
	I1009 19:12:43.217327   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:12:43.217360   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217379   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217439   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:12:43.217464   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217486   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217518   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:12:43.217529   28654 main.go:141] libmachine: (ha-199780-m03) Calling .PreCreateCheck
	I1009 19:12:43.217680   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:12:43.218031   28654 main.go:141] libmachine: Creating machine...
	I1009 19:12:43.218043   28654 main.go:141] libmachine: (ha-199780-m03) Calling .Create
	I1009 19:12:43.218158   28654 main.go:141] libmachine: (ha-199780-m03) Creating KVM machine...
	I1009 19:12:43.219370   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing default KVM network
	I1009 19:12:43.219545   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing private KVM network mk-ha-199780
	I1009 19:12:43.219670   28654 main.go:141] libmachine: (ha-199780-m03) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.219694   28654 main.go:141] libmachine: (ha-199780-m03) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:12:43.219770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.219647   29426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.219839   28654 main.go:141] libmachine: (ha-199780-m03) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:12:43.456571   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.456478   29426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa...
	I1009 19:12:43.637087   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637007   29426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk...
	I1009 19:12:43.637111   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing magic tar header
	I1009 19:12:43.637123   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing SSH key tar header
	I1009 19:12:43.637132   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637111   29426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.637237   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03
	I1009 19:12:43.637256   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 (perms=drwx------)
	I1009 19:12:43.637263   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:12:43.637277   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:12:43.637285   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.637293   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:12:43.637301   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:12:43.637308   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:12:43.637313   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home
	I1009 19:12:43.637322   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Skipping /home - not owner
	I1009 19:12:43.637330   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:12:43.637338   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:12:43.637345   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:12:43.637355   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:12:43.637364   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:43.638194   28654 main.go:141] libmachine: (ha-199780-m03) define libvirt domain using xml: 
	I1009 19:12:43.638216   28654 main.go:141] libmachine: (ha-199780-m03) <domain type='kvm'>
	I1009 19:12:43.638226   28654 main.go:141] libmachine: (ha-199780-m03)   <name>ha-199780-m03</name>
	I1009 19:12:43.638239   28654 main.go:141] libmachine: (ha-199780-m03)   <memory unit='MiB'>2200</memory>
	I1009 19:12:43.638251   28654 main.go:141] libmachine: (ha-199780-m03)   <vcpu>2</vcpu>
	I1009 19:12:43.638258   28654 main.go:141] libmachine: (ha-199780-m03)   <features>
	I1009 19:12:43.638266   28654 main.go:141] libmachine: (ha-199780-m03)     <acpi/>
	I1009 19:12:43.638275   28654 main.go:141] libmachine: (ha-199780-m03)     <apic/>
	I1009 19:12:43.638288   28654 main.go:141] libmachine: (ha-199780-m03)     <pae/>
	I1009 19:12:43.638296   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638304   28654 main.go:141] libmachine: (ha-199780-m03)   </features>
	I1009 19:12:43.638314   28654 main.go:141] libmachine: (ha-199780-m03)   <cpu mode='host-passthrough'>
	I1009 19:12:43.638338   28654 main.go:141] libmachine: (ha-199780-m03)   
	I1009 19:12:43.638360   28654 main.go:141] libmachine: (ha-199780-m03)   </cpu>
	I1009 19:12:43.638375   28654 main.go:141] libmachine: (ha-199780-m03)   <os>
	I1009 19:12:43.638386   28654 main.go:141] libmachine: (ha-199780-m03)     <type>hvm</type>
	I1009 19:12:43.638397   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='cdrom'/>
	I1009 19:12:43.638406   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='hd'/>
	I1009 19:12:43.638416   28654 main.go:141] libmachine: (ha-199780-m03)     <bootmenu enable='no'/>
	I1009 19:12:43.638425   28654 main.go:141] libmachine: (ha-199780-m03)   </os>
	I1009 19:12:43.638435   28654 main.go:141] libmachine: (ha-199780-m03)   <devices>
	I1009 19:12:43.638451   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='cdrom'>
	I1009 19:12:43.638468   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/boot2docker.iso'/>
	I1009 19:12:43.638480   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hdc' bus='scsi'/>
	I1009 19:12:43.638491   28654 main.go:141] libmachine: (ha-199780-m03)       <readonly/>
	I1009 19:12:43.638498   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638511   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='disk'>
	I1009 19:12:43.638529   28654 main.go:141] libmachine: (ha-199780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:12:43.638545   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk'/>
	I1009 19:12:43.638557   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hda' bus='virtio'/>
	I1009 19:12:43.638566   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638575   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638585   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='mk-ha-199780'/>
	I1009 19:12:43.638600   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638613   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638624   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638637   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='default'/>
	I1009 19:12:43.638647   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638658   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638665   28654 main.go:141] libmachine: (ha-199780-m03)     <serial type='pty'>
	I1009 19:12:43.638685   28654 main.go:141] libmachine: (ha-199780-m03)       <target port='0'/>
	I1009 19:12:43.638701   28654 main.go:141] libmachine: (ha-199780-m03)     </serial>
	I1009 19:12:43.638713   28654 main.go:141] libmachine: (ha-199780-m03)     <console type='pty'>
	I1009 19:12:43.638724   28654 main.go:141] libmachine: (ha-199780-m03)       <target type='serial' port='0'/>
	I1009 19:12:43.638734   28654 main.go:141] libmachine: (ha-199780-m03)     </console>
	I1009 19:12:43.638742   28654 main.go:141] libmachine: (ha-199780-m03)     <rng model='virtio'>
	I1009 19:12:43.638760   28654 main.go:141] libmachine: (ha-199780-m03)       <backend model='random'>/dev/random</backend>
	I1009 19:12:43.638775   28654 main.go:141] libmachine: (ha-199780-m03)     </rng>
	I1009 19:12:43.638786   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638796   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638812   28654 main.go:141] libmachine: (ha-199780-m03)   </devices>
	I1009 19:12:43.638828   28654 main.go:141] libmachine: (ha-199780-m03) </domain>
	I1009 19:12:43.638836   28654 main.go:141] libmachine: (ha-199780-m03) 
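The XML dumped above is the libvirt domain definition the kvm2 driver generates for ha-199780-m03: an hvm guest that boots from cdrom then disk, with the boot2docker ISO attached as a read-only SCSI cdrom, the node's rawdisk as a virtio block device, two virtio NICs (one on the private mk-ha-199780 network, one on libvirt's default network), a pty serial console, and a virtio RNG backed by /dev/random. As a rough illustration only, the following Go sketch defines and starts such a domain by shelling out to virsh; the domain name and the idea of feeding it an XML file come from the log, while the helper name, file path, and error handling are assumptions (minikube's driver talks to libvirt through its API rather than virsh).

	package main
	
	import (
		"fmt"
		"log"
		"os/exec"
	)
	
	// defineAndStart registers a libvirt domain from an XML file and boots it,
	// roughly what the kvm2 driver does via the libvirt API.
	func defineAndStart(xmlPath, name string) error {
		if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh define: %v: %s", err, out)
		}
		if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh start: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		// Hypothetical path; in the log the XML is built in memory by the driver.
		if err := defineAndStart("/tmp/ha-199780-m03.xml", "ha-199780-m03"); err != nil {
			log.Fatal(err)
		}
	}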
	I1009 19:12:43.645429   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:1f:d1:3b in network default
	I1009 19:12:43.645983   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:43.646001   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring networks are active...
	I1009 19:12:43.646747   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network default is active
	I1009 19:12:43.647149   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network mk-ha-199780 is active
	I1009 19:12:43.647523   28654 main.go:141] libmachine: (ha-199780-m03) Getting domain xml...
	I1009 19:12:43.648287   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:44.847549   28654 main.go:141] libmachine: (ha-199780-m03) Waiting to get IP...
	I1009 19:12:44.848392   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:44.848787   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:44.848829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:44.848770   29426 retry.go:31] will retry after 229.997293ms: waiting for machine to come up
	I1009 19:12:45.079971   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.080455   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.080486   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.080421   29426 retry.go:31] will retry after 304.992826ms: waiting for machine to come up
	I1009 19:12:45.386902   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.387362   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.387386   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.387322   29426 retry.go:31] will retry after 327.958718ms: waiting for machine to come up
	I1009 19:12:45.716733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.717214   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.717239   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.717174   29426 retry.go:31] will retry after 508.576077ms: waiting for machine to come up
	I1009 19:12:46.227904   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.228327   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.228353   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.228287   29426 retry.go:31] will retry after 585.555609ms: waiting for machine to come up
	I1009 19:12:46.814896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.815296   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.815326   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.815257   29426 retry.go:31] will retry after 940.877771ms: waiting for machine to come up
	I1009 19:12:47.757334   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:47.757733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:47.757767   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:47.757680   29426 retry.go:31] will retry after 1.078987913s: waiting for machine to come up
	I1009 19:12:48.838156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:48.838584   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:48.838612   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:48.838534   29426 retry.go:31] will retry after 1.204337562s: waiting for machine to come up
	I1009 19:12:50.044036   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:50.044425   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:50.044447   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:50.044387   29426 retry.go:31] will retry after 1.424565558s: waiting for machine to come up
	I1009 19:12:51.470825   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:51.471291   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:51.471328   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:51.471250   29426 retry.go:31] will retry after 1.95975676s: waiting for machine to come up
	I1009 19:12:53.432604   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:53.433116   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:53.433142   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:53.433070   29426 retry.go:31] will retry after 2.780245822s: waiting for machine to come up
	I1009 19:12:56.216025   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:56.216374   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:56.216395   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:56.216337   29426 retry.go:31] will retry after 3.28653641s: waiting for machine to come up
	I1009 19:12:59.504791   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:59.505156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:59.505184   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:59.505128   29426 retry.go:31] will retry after 4.186849932s: waiting for machine to come up
	I1009 19:13:03.693337   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:03.693747   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:13:03.693770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:13:03.693703   29426 retry.go:31] will retry after 5.146937605s: waiting for machine to come up
	I1009 19:13:08.842460   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.842868   28654 main.go:141] libmachine: (ha-199780-m03) Found IP for machine: 192.168.39.84
	I1009 19:13:08.842887   28654 main.go:141] libmachine: (ha-199780-m03) Reserving static IP address...
	I1009 19:13:08.842896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.843320   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find host DHCP lease matching {name: "ha-199780-m03", mac: "52:54:00:15:92:44", ip: "192.168.39.84"} in network mk-ha-199780
	I1009 19:13:08.913543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Getting to WaitForSSH function...
	I1009 19:13:08.913573   28654 main.go:141] libmachine: (ha-199780-m03) Reserved static IP address: 192.168.39.84
	I1009 19:13:08.913586   28654 main.go:141] libmachine: (ha-199780-m03) Waiting for SSH to be available...
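The "will retry after ..." lines above come from a polling loop that re-reads the DHCP leases of mk-ha-199780 until the new MAC address acquires an IP, sleeping a growing, jittered delay between attempts (roughly 230ms up to about 5s in this run). A minimal Go sketch of that retry-with-backoff pattern, assuming a hypothetical lookupIP helper; the exact delays and cap are illustrative, not the driver's actual schedule.

	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls lookupIP until it returns a non-empty address or the
	// deadline passes, sleeping a little longer (with jitter) after each miss.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			time.Sleep(delay + jitter)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		return "", fmt.Errorf("timed out waiting for machine to come up")
	}
	
	func main() {
		// Hypothetical lookup; the driver parses the host DHCP leases for the domain's MAC.
		ip, err := waitForIP(func() (string, error) { return "192.168.39.84", nil }, time.Minute)
		fmt.Println(ip, err)
	}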
	I1009 19:13:08.916270   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916658   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:92:44}
	I1009 19:13:08.916682   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916805   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH client type: external
	I1009 19:13:08.916829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa (-rw-------)
	I1009 19:13:08.916873   28654 main.go:141] libmachine: (ha-199780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:13:08.916898   28654 main.go:141] libmachine: (ha-199780-m03) DBG | About to run SSH command:
	I1009 19:13:08.916914   28654 main.go:141] libmachine: (ha-199780-m03) DBG | exit 0
	I1009 19:13:09.046941   28654 main.go:141] libmachine: (ha-199780-m03) DBG | SSH cmd err, output: <nil>: 
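WaitForSSH above simply runs `exit 0` through the external ssh client with the flags shown in the log (no host-key checking, key-only auth, 10s connect timeout) until the command succeeds. A hedged Go sketch of one such probe; the flag list mirrors the logged command, while the function name and single-shot structure are illustrative.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// probeSSH runs `exit 0` on the target with the same style of flags the
	// kvm2 driver logs for its external ssh client; nil means sshd answered.
	func probeSSH(user, ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		out, err := exec.Command("ssh", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("ssh probe failed: %v: %s", err, out)
		}
		return nil
	}
	
	func main() {
		fmt.Println(probeSSH("docker", "192.168.39.84",
			"/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa"))
	}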
	I1009 19:13:09.047218   28654 main.go:141] libmachine: (ha-199780-m03) KVM machine creation complete!
	I1009 19:13:09.047540   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:09.048076   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048290   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048435   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:13:09.048449   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetState
	I1009 19:13:09.049768   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:13:09.049784   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:13:09.049792   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:13:09.049800   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.051899   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052232   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.052256   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052390   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.052558   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052690   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052792   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.052919   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.053134   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.053146   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:13:09.162161   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.162193   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:13:09.162204   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.165282   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165740   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.165770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165998   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.166189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166372   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166511   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.166658   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.166820   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.166830   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:13:09.279803   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:13:09.279876   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:13:09.279888   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:13:09.279896   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280130   28654 buildroot.go:166] provisioning hostname "ha-199780-m03"
	I1009 19:13:09.280155   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280355   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.282543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.282879   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.282903   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.283031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.283188   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283335   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283479   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.283637   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.283800   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.283813   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m03 && echo "ha-199780-m03" | sudo tee /etc/hostname
	I1009 19:13:09.410249   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m03
	
	I1009 19:13:09.410286   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.413156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.413597   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413831   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.414036   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414350   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.414484   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.414653   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.414676   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:13:09.536419   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.536443   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:13:09.536456   28654 buildroot.go:174] setting up certificates
	I1009 19:13:09.536466   28654 provision.go:84] configureAuth start
	I1009 19:13:09.536474   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.536766   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:09.539383   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539742   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.539769   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539905   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.542068   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542398   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.542433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542583   28654 provision.go:143] copyHostCerts
	I1009 19:13:09.542606   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542633   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:13:09.542642   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542706   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:13:09.542776   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542794   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:13:09.542798   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542825   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:13:09.542870   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542886   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:13:09.542891   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542910   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:13:09.542956   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m03 san=[127.0.0.1 192.168.39.84 ha-199780-m03 localhost minikube]
	I1009 19:13:09.606712   28654 provision.go:177] copyRemoteCerts
	I1009 19:13:09.606761   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:13:09.606781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.609303   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609661   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.609689   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609868   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.610022   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.610145   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.610298   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:09.696779   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:13:09.696841   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:13:09.720751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:13:09.720811   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:13:09.744059   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:13:09.744114   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:13:09.767833   28654 provision.go:87] duration metric: took 231.356763ms to configureAuth
	I1009 19:13:09.767867   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:13:09.768111   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:09.768195   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.770602   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.770927   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.770956   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.771124   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.771314   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771473   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.771780   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.771973   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.772002   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:13:09.999632   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:13:09.999662   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:13:09.999673   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetURL
	I1009 19:13:10.001043   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using libvirt version 6000000
	I1009 19:13:10.002982   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003339   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.003364   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003485   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:13:10.003499   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:13:10.003506   28654 client.go:171] duration metric: took 26.786200346s to LocalClient.Create
	I1009 19:13:10.003528   28654 start.go:167] duration metric: took 26.786259048s to libmachine.API.Create "ha-199780"
	I1009 19:13:10.003541   28654 start.go:293] postStartSetup for "ha-199780-m03" (driver="kvm2")
	I1009 19:13:10.003557   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:13:10.003580   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.003751   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:13:10.003777   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.005954   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006305   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.006342   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006472   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.006621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.006781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.006914   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.097042   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:13:10.101538   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:13:10.101559   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:13:10.101628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:13:10.101716   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:13:10.101727   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:13:10.101831   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:13:10.111544   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:10.138321   28654 start.go:296] duration metric: took 134.764482ms for postStartSetup
	I1009 19:13:10.138362   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:10.138886   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.141464   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.141752   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.141798   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.142045   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:13:10.142239   28654 start.go:128] duration metric: took 26.94338984s to createHost
	I1009 19:13:10.142260   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.144573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.144860   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.144895   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.145048   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.145233   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145397   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145561   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.145727   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:10.145915   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:10.145928   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:13:10.259958   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501190.239755663
	
	I1009 19:13:10.259981   28654 fix.go:216] guest clock: 1728501190.239755663
	I1009 19:13:10.259990   28654 fix.go:229] Guest: 2024-10-09 19:13:10.239755663 +0000 UTC Remote: 2024-10-09 19:13:10.142249873 +0000 UTC m=+147.747443556 (delta=97.50579ms)
	I1009 19:13:10.260009   28654 fix.go:200] guest clock delta is within tolerance: 97.50579ms
	I1009 19:13:10.260014   28654 start.go:83] releasing machines lock for "ha-199780-m03", held for 27.061310572s
	I1009 19:13:10.260031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.260248   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.262692   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.263042   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.263090   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.265368   28654 out.go:177] * Found network options:
	I1009 19:13:10.266603   28654 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.83
	W1009 19:13:10.267719   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.267740   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.267752   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268176   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268354   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268457   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:13:10.268495   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	W1009 19:13:10.268522   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.268539   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.268607   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:13:10.268629   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.271001   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271378   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271413   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271563   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.271675   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.271760   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.271841   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.271883   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271905   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.272050   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.272201   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.272349   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.272499   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.509806   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:13:10.515665   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:13:10.515723   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:13:10.534296   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:13:10.534319   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:13:10.534372   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:13:10.550041   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:13:10.563633   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:13:10.563683   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:13:10.577637   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:13:10.592588   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:13:10.712305   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:13:10.879292   28654 docker.go:233] disabling docker service ...
	I1009 19:13:10.879381   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:13:10.894134   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:13:10.907059   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:13:11.025068   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:13:11.146057   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:13:11.160573   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:13:11.181994   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:13:11.182045   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.191765   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:13:11.191812   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.201883   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.212073   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.222390   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:13:11.232857   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.243298   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.262217   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.272906   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:13:11.282747   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:13:11.282797   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:13:11.296609   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:13:11.306096   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:11.423441   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
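Taken together, the sed and tee commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.10 as the pause image, the cgroupfs cgroup manager with conmon in the "pod" cgroup, and an unrestricted unprivileged port range, then remove the stock net.mk CNI config, load br_netfilter, enable IPv4 forwarding, and restart crio. The sketch below writes out roughly what the edited drop-in contains so it can be compared against a real node; only the edited keys are taken from the log, and the TOML section headers are an assumption since the original file is not shown.

	package main
	
	import (
		"fmt"
		"os"
	)
	
	// crioDropIn approximates /etc/crio/crio.conf.d/02-crio.conf after the logged edits.
	const crioDropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`
	
	func main() {
		// Write to a scratch path rather than the real drop-in directory.
		if err := os.WriteFile("/tmp/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(crioDropIn)
	}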
	I1009 19:13:11.515740   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:13:11.515821   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:13:11.520647   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:13:11.520700   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:13:11.524288   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:13:11.564050   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:13:11.564119   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.592463   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.620536   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:13:11.622484   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:13:11.623769   28654 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.83
	I1009 19:13:11.624794   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:11.627494   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.627836   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:11.627861   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.628050   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:13:11.632057   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:11.644307   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:13:11.644526   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:11.644823   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.644864   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.660098   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1009 19:13:11.660500   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.660929   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.660963   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.661312   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.661490   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:13:11.662965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:11.663268   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.663304   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.677584   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I1009 19:13:11.678002   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.678412   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.678433   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.678716   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.678874   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:11.678992   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.84
	I1009 19:13:11.679002   28654 certs.go:194] generating shared ca certs ...
	I1009 19:13:11.679014   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.679142   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:13:11.679180   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:13:11.679190   28654 certs.go:256] generating profile certs ...
	I1009 19:13:11.679253   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:13:11.679275   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8
	I1009 19:13:11.679293   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:13:11.751003   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 ...
	I1009 19:13:11.751029   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8: {Name:mkf155e8357b65010528843e053f2a71f20ad105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751190   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 ...
	I1009 19:13:11.751202   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8: {Name:mk6ff6d5eec7167bd850e69dc06edb50691eb6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751267   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:13:11.751393   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:13:11.751509   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
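The apiserver certificate generated above is signed by the shared minikube CA and carries SANs for every address a client might use to reach this control plane: the in-cluster service IPs 10.96.0.1 and 10.0.0.1, 127.0.0.1, the three node IPs 192.168.39.114/.83/.84, and the 192.168.39.254 load-balancer VIP. As a compact illustration of the IP-SAN mechanics only, the Go sketch below builds a certificate with that SAN list using crypto/x509; it is self-signed for brevity, whereas minikube's real code signs with the existing CA key and includes DNS names as well.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// SAN list taken from the crypto.go log line above.
		ips := []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.83"),
			net.ParseIP("192.168.39.84"), net.ParseIP("192.168.39.254"),
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-199780-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		// Self-signed here; the cert in the log is signed by minikubeCA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			log.Fatal(err)
		}
	}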
	I1009 19:13:11.751523   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:13:11.751535   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:13:11.751550   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:13:11.751563   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:13:11.751576   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:13:11.751588   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:13:11.751600   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:13:11.771159   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:13:11.771229   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:13:11.771259   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:13:11.771269   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:13:11.771293   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:13:11.771314   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:13:11.771335   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:13:11.771370   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:11.771395   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:13:11.771408   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:11.771420   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:13:11.771451   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:11.774438   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.774845   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:11.774865   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.775017   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:11.775204   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:11.775350   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:11.775478   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:11.851359   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:13:11.856664   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:13:11.868123   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:13:11.875260   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:13:11.887341   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:13:11.891724   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:13:11.902332   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:13:11.906621   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:13:11.916908   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:13:11.921562   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:13:11.931584   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:13:11.935971   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:13:11.946941   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:13:11.972757   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:13:11.996080   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:13:12.019624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:13:12.042711   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1009 19:13:12.067239   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:13:12.094118   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:13:12.120234   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:13:12.143055   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:13:12.165868   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:13:12.188853   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:13:12.211293   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:13:12.227623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:13:12.243623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:13:12.260811   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:13:12.278131   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:13:12.295237   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:13:12.312441   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:13:12.328516   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:13:12.334428   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:13:12.345201   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349589   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.355741   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:13:12.366097   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:13:12.376756   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381423   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381474   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.387265   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:13:12.398550   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:13:12.410065   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414879   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414939   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.420521   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
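The openssl x509 -hash / ln -fs pairs above install each CA into the node's trust directory under its OpenSSL subject-hash name (<hash>.0). A hedged Go sketch of the same two steps for a single certificate, shelling out to openssl just as the log does (linkCACert is illustrative only, not minikube code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and links it
// into the trust directory as <hash>.0, the same two commands
// (openssl x509 -hash -noout; ln -fs) that the log runs over SSH.
func linkCACert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // the -f in ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}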
	I1009 19:13:12.431459   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:13:12.435599   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:13:12.435653   28654 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I1009 19:13:12.435745   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:13:12.435776   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:13:12.435816   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:13:12.450815   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:13:12.450880   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
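The manifest printed above is the static pod that kube-vip.go generates and later writes to /etc/kubernetes/manifests/kube-vip.yaml. A small sketch, assuming the sigs.k8s.io/yaml and k8s.io/api modules, of how such a manifest could be decoded and its advertised VIP checked before being written (vipAddress is a hypothetical helper, not part of minikube):

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

// vipAddress decodes a kube-vip static-pod manifest and returns the value of its
// "address" env var, which for the manifest above should be the HA VIP 192.168.39.254.
func vipAddress(manifest []byte) (string, error) {
	var pod corev1.Pod
	if err := yaml.Unmarshal(manifest, &pod); err != nil {
		return "", err
	}
	for _, c := range pod.Spec.Containers {
		for _, env := range c.Env {
			if env.Name == "address" {
				return env.Value, nil
			}
		}
	}
	return "", fmt.Errorf("no address env var in pod %q", pod.Name)
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // e.g. the file scp'd to /etc/kubernetes/manifests
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	addr, err := vipAddress(data)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("kube-vip advertises", addr)
}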
	I1009 19:13:12.450927   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.462732   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:13:12.462797   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.473333   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1009 19:13:12.473358   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473356   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:13:12.473375   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473392   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1009 19:13:12.473419   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473431   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473433   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:12.484568   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:13:12.484600   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1009 19:13:12.496090   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496156   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:13:12.496169   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496179   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:13:12.547231   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:13:12.547271   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
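Each binary above is fetched with a checksum=file:...sha256 URL, i.e. it is validated against the digest published next to it on dl.k8s.io. A hedged sketch of that verification step in Go (file names in main are placeholders for the example):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifyChecksum streams a downloaded binary through SHA-256 and compares it with
// the digest from the matching .sha256 file; dl.k8s.io publishes the hex digest as
// the first field of that file.
func verifyChecksum(binPath, sumPath string) error {
	f, err := os.Open(binPath)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	sum, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sum))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file %s", sumPath)
	}
	if got != fields[0] {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", binPath, got, fields[0])
	}
	return nil
}

func main() {
	if err := verifyChecksum("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubelet checksum OK")
}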
	I1009 19:13:13.298298   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:13:13.308347   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:13:13.325500   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:13:13.341701   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:13:13.358009   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:13:13.361852   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
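The grep / bash one-liner above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for the host is dropped and the current VIP is appended. A rough Go equivalent, writing to a throwaway path rather than /etc/hosts (ensureHostEntry is illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line ending in the host name and appends
// "ip<TAB>host", the same effect as the grep/echo pipeline in the log.
func ensureHostEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(strings.TrimSpace(line), host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writes to a scratch file here rather than /etc/hosts itself.
	if err := ensureHostEntry("/tmp/hosts.example", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}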
	I1009 19:13:13.374963   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:13.498686   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:13.518977   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:13.519473   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:13.519531   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:13.538200   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I1009 19:13:13.538624   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:13.539117   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:13.539147   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:13.539481   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:13.539662   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:13.539788   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:13:13.539943   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:13:13.539967   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:13.542836   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543274   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:13.543303   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543418   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:13.543577   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:13.543722   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:13.543861   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:13.700075   28654 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:13.700122   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I1009 19:13:36.009706   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (22.309560416s)
	I1009 19:13:36.009741   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:13:36.574647   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m03 minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:13:36.718344   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:13:36.828582   28654 start.go:319] duration metric: took 23.288789983s to joinCluster
	I1009 19:13:36.828663   28654 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:36.828971   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:36.830104   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:13:36.831350   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:37.149519   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:37.192508   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:13:37.192892   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:13:37.192972   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:13:37.193248   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m03" to be "Ready" ...
	I1009 19:13:37.193328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.193338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.193350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.193359   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.197001   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:37.693747   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.693768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.693780   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.693785   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.697648   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.193891   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.193913   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.193924   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.193929   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.197274   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.693429   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.693457   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.693469   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.693474   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.696864   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:39.193488   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.193508   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.193514   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.193519   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.196227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:39.196768   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:39.694269   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.694294   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.694306   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.694313   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.697293   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:40.193909   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.193938   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.193948   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.193953   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.197226   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:40.693770   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.693793   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.693804   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.693809   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.697070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:41.194260   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.194291   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.194295   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.197138   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:41.197715   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:41.694049   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.694075   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.694087   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.694094   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.697134   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.194287   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.194311   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.194321   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.194327   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.197589   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.693552   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.693571   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.693581   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.693588   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.696963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.193761   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.193786   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.193798   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.193806   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.197438   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.198158   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:43.693694   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.693716   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.693724   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.693728   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.697267   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.193683   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.193704   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.193711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.193715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.197056   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.693897   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.693918   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.693928   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.693933   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.696914   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:45.193775   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.193795   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.193803   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.193807   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.197164   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.694421   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.694455   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.694461   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.697506   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.698052   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:46.193428   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.193455   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.193486   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.193492   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.197151   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:46.693979   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.693997   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.694013   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.694017   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.697611   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.193578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.193600   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.193607   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.193611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.197105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.693781   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.693802   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.693813   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.693817   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.696934   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:48.194335   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.194358   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.194365   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.194368   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.198434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:48.199180   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:48.693737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.693758   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.693768   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.693773   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.697344   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:49.193432   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.193451   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.193459   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.193463   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.196304   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:49.694364   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.694385   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.694396   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.694403   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.697486   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.193397   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.193418   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.193431   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.193435   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.197076   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.693831   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.693856   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.693867   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.693873   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.697369   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.698284   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:51.194258   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.194289   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.194294   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.197449   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:51.694317   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.694339   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.694350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.694356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.698049   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.194018   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.194043   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.194052   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.194061   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.197494   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.694202   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.694224   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.694232   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.694236   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.697227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:53.193702   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.193722   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.193729   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.193733   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.196923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:53.197555   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:53.694135   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.694158   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.694166   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.694172   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.697390   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:54.193409   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.193427   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.193439   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.193443   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.195968   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.693832   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.693853   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.693861   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.693866   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.696718   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.697386   28654 node_ready.go:49] node "ha-199780-m03" has status "Ready":"True"
	I1009 19:13:54.697405   28654 node_ready.go:38] duration metric: took 17.504141075s for node "ha-199780-m03" to be "Ready" ...
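The repeated GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03 requests above are node_ready.go polling the node's Ready condition roughly every 500ms until it flips to True. A minimal client-go sketch of that pattern (kubeconfig path and node name taken from the log; this is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node every 500ms until its Ready condition is True,
// the same loop visible in the log above.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19780-9412/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(context.Background(), cs, "ha-199780-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-199780-m03" has status "Ready":"True"`)
}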
	I1009 19:13:54.697413   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:54.697463   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:13:54.697471   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.697479   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.697484   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.703461   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:13:54.710054   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.710118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:13:54.710126   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.710133   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.710136   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.712863   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.713585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.713602   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.713609   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.713613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.715857   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.716501   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.716519   28654 pod_ready.go:82] duration metric: took 6.443501ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716529   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:13:54.716586   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.716593   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.716599   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.718834   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.719475   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.719490   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.719499   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.719505   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.721592   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.722022   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.722036   28654 pod_ready.go:82] duration metric: took 5.49901ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722045   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722092   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:13:54.722102   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.722111   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.722117   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.724132   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.724537   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.724549   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.724558   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.724564   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.726416   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:13:54.726760   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.726774   28654 pod_ready.go:82] duration metric: took 4.721439ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726783   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726829   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:13:54.726838   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.726847   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.726853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.728868   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.729481   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:54.729499   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.729510   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.729515   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.731574   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.732095   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.732112   28654 pod_ready.go:82] duration metric: took 5.322203ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.732123   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.894472   28654 request.go:632] Waited for 162.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894602   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894612   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.894619   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.894623   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.897741   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.094188   28654 request.go:632] Waited for 195.683908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094240   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094246   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.094253   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.094258   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.097407   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.098074   28654 pod_ready.go:93] pod "etcd-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.098090   28654 pod_ready.go:82] duration metric: took 365.959261ms for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
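The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, Burst 10), not from the API server. A short sketch of raising those limits on a rest.Config; the values used here are purely illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19780-9412/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // illustrative: sustained requests per second before throttling kicks in
	cfg.Burst = 100 // illustrative: short bursts allowed above the sustained rate
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client configured with QPS=%v Burst=%v: %T\n", cfg.QPS, cfg.Burst, cs)
}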
	I1009 19:13:55.098111   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.294211   28654 request.go:632] Waited for 196.026886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294264   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294270   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.294277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.294281   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.297814   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.494347   28654 request.go:632] Waited for 195.288987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494396   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494400   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.494409   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.494414   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.497640   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.498264   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.498282   28654 pod_ready.go:82] duration metric: took 400.159789ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.498295   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.694371   28654 request.go:632] Waited for 196.007868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694438   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.694452   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.694457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.697453   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:55.894821   28654 request.go:632] Waited for 196.365606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894877   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894894   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.894903   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.894908   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.898105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.898641   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.898656   28654 pod_ready.go:82] duration metric: took 400.354565ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.898665   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.094875   28654 request.go:632] Waited for 196.142376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094943   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094953   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.094962   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.094969   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.098488   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.294812   28654 request.go:632] Waited for 195.339632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294879   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294886   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.294897   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.294905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.298371   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.299243   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.299268   28654 pod_ready.go:82] duration metric: took 400.59742ms for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.299278   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.494432   28654 request.go:632] Waited for 195.083743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494487   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494493   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.494503   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.494508   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.498203   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.694515   28654 request.go:632] Waited for 195.651266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694574   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.694582   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.694589   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.697903   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.698503   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.698524   28654 pod_ready.go:82] duration metric: took 399.235411ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.698534   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.894604   28654 request.go:632] Waited for 196.010295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894690   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894699   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.894709   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.894725   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.897698   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:57.094771   28654 request.go:632] Waited for 196.347164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094830   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094837   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.094846   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.094853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.097915   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.098466   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.098483   28654 pod_ready.go:82] duration metric: took 399.942607ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.098496   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.294694   28654 request.go:632] Waited for 196.107304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294760   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.294778   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.294791   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.298281   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.493859   28654 request.go:632] Waited for 194.862003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493928   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493933   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.493941   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.493945   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.497771   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.498530   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.498546   28654 pod_ready.go:82] duration metric: took 400.036948ms for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.498556   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.694138   28654 request.go:632] Waited for 195.506846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694204   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.694211   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.694217   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.698240   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:57.894301   28654 request.go:632] Waited for 195.370676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894370   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894377   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.894391   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.894398   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.897846   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.898728   28654 pod_ready.go:93] pod "kube-proxy-cltcd" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.898745   28654 pod_ready.go:82] duration metric: took 400.184495ms for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.898756   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.094244   28654 request.go:632] Waited for 195.417272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094320   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094332   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.094339   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.094343   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.098070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.294156   28654 request.go:632] Waited for 195.371857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294219   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294226   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.294237   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.294245   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.297391   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.297856   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.297872   28654 pod_ready.go:82] duration metric: took 399.106499ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.297884   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.493870   28654 request.go:632] Waited for 195.913549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493927   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.493937   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.493944   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.497117   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.694489   28654 request.go:632] Waited for 196.566825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694545   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694552   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.694563   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.694568   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.697679   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.698297   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.698312   28654 pod_ready.go:82] duration metric: took 400.419475ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.698322   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.894499   28654 request.go:632] Waited for 196.088891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894592   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.894603   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.894613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.897964   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.094228   28654 request.go:632] Waited for 195.366071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094310   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094322   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.094333   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.094342   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.097557   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.098186   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.098207   28654 pod_ready.go:82] duration metric: took 399.878488ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.098219   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.294278   28654 request.go:632] Waited for 195.983419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294332   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.294345   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.294350   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.297821   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.493975   28654 request.go:632] Waited for 195.208037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494031   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494036   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.494044   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.494049   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.501563   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:13:59.502080   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.502097   28654 pod_ready.go:82] duration metric: took 403.868133ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.502106   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.694192   28654 request.go:632] Waited for 192.028751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694247   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694253   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.694260   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.694264   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.697180   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.894169   28654 request.go:632] Waited for 196.350026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894218   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894223   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.894230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.894235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.897240   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.897806   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.897823   28654 pod_ready.go:82] duration metric: took 395.71123ms for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.897835   28654 pod_ready.go:39] duration metric: took 5.200413633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:59.897849   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:13:59.897900   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:59.914617   28654 api_server.go:72] duration metric: took 23.08591673s to wait for apiserver process to appear ...
	I1009 19:13:59.914639   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:13:59.914655   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:13:59.918628   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:13:59.918679   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:13:59.918686   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.918696   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.918706   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.919571   28654 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 19:13:59.919687   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:13:59.919708   28654 api_server.go:131] duration metric: took 5.063855ms to wait for apiserver health ...
	I1009 19:13:59.919716   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:14:00.094827   28654 request.go:632] Waited for 175.023163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094896   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094904   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.094915   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.094925   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.100594   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.107658   28654 system_pods.go:59] 24 kube-system pods found
	I1009 19:14:00.107684   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.107689   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.107692   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.107695   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.107699   28654 system_pods.go:61] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.107702   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.107706   28654 system_pods.go:61] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.107711   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.107716   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.107721   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.107725   28654 system_pods.go:61] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.107733   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.107738   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.107747   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.107754   28654 system_pods.go:61] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.107758   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.107765   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.107770   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.107777   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.107783   28654 system_pods.go:61] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.107790   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.107795   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.107802   28654 system_pods.go:61] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.107808   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.107818   28654 system_pods.go:74] duration metric: took 188.095908ms to wait for pod list to return data ...
	I1009 19:14:00.107830   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:14:00.294248   28654 request.go:632] Waited for 186.335259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294301   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294308   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.294318   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.294323   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.298434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:14:00.298601   28654 default_sa.go:45] found service account: "default"
	I1009 19:14:00.298618   28654 default_sa.go:55] duration metric: took 190.779244ms for default service account to be created ...
	I1009 19:14:00.298632   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:14:00.493990   28654 request.go:632] Waited for 195.280768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494052   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494059   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.494069   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.494077   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.499571   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.506443   28654 system_pods.go:86] 24 kube-system pods found
	I1009 19:14:00.506469   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.506474   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.506478   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.506482   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.506486   28654 system_pods.go:89] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.506490   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.506495   28654 system_pods.go:89] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.506503   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.506511   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.506518   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.506527   28654 system_pods.go:89] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.506539   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.506548   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.506555   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.506558   28654 system_pods.go:89] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.506564   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.506569   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.506574   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.506580   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.506585   28654 system_pods.go:89] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.506590   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.506598   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.506602   28654 system_pods.go:89] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.506610   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.506619   28654 system_pods.go:126] duration metric: took 207.977758ms to wait for k8s-apps to be running ...
	I1009 19:14:00.506632   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:14:00.506681   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:14:00.521903   28654 system_svc.go:56] duration metric: took 15.266021ms WaitForService to wait for kubelet
	I1009 19:14:00.521926   28654 kubeadm.go:582] duration metric: took 23.693227633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:14:00.521941   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:14:00.694326   28654 request.go:632] Waited for 172.306887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694392   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694398   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.694405   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.694409   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.698331   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:14:00.699548   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699566   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699577   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699581   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699584   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699587   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699591   28654 node_conditions.go:105] duration metric: took 177.645761ms to run NodePressure ...
	I1009 19:14:00.699601   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:14:00.699621   28654 start.go:255] writing updated cluster config ...
	I1009 19:14:00.699890   28654 ssh_runner.go:195] Run: rm -f paused
	I1009 19:14:00.750344   28654 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 19:14:00.752481   28654 out.go:177] * Done! kubectl is now configured to use "ha-199780" cluster and "default" namespace by default
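
The tail of the minikube start log above shows the client probing the apiserver's /healthz endpoint at https://192.168.39.114:8443, receiving "ok", then reading /version before moving on to listing kube-system pods. The snippet below is only an illustrative, self-contained sketch of that kind of health probe written in Go; it is not minikube's own implementation, and the InsecureSkipVerify setting is an assumption made here purely because the test VM's apiserver uses a self-signed CA (a real client would load the cluster CA instead).

// Minimal sketch of an apiserver /healthz probe like the one logged above.
// Illustrative only; not minikube's code. Endpoint taken from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumed for illustration: the test cluster's CA is self-signed,
			// so certificate verification is skipped. Unsafe outside a local VM.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	// Poll until the apiserver reports healthy or the deadline passes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.114:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}

In the sequence recorded in this log, the per-pod "Ready" waits finish first, then the apiserver process check, the /healthz probe, and the /version request follow, before the kube-system pod and service-account checks complete the startup verification.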
	
	
	==> CRI-O <==
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.648973973Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501471648949251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc7f56d4-bf6f-471d-a730-8c50edd10c6e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.649917525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e507f29-67e0-4b2c-a694-48ac572a91a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.649972470Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e507f29-67e0-4b2c-a694-48ac572a91a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.650204615Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e507f29-67e0-4b2c-a694-48ac572a91a2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.686069025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b74ee66f-11c3-4c5d-b072-f50bf1a67507 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.686141531Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b74ee66f-11c3-4c5d-b072-f50bf1a67507 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.687334627Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b65b365-0f7a-4438-80d2-042c3fdb3149 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.688008495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501471687984712,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b65b365-0f7a-4438-80d2-042c3fdb3149 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.688728191Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7cbb601e-8505-4dda-be58-14f1b5d3398a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.688802078Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7cbb601e-8505-4dda-be58-14f1b5d3398a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.689061516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7cbb601e-8505-4dda-be58-14f1b5d3398a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.727598459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c5a50f41-cd85-4c09-9164-b5b723852499 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.727672900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c5a50f41-cd85-4c09-9164-b5b723852499 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.728946657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a5730fb-5c3b-4b76-b804-8ce49f5e4b71 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.729383889Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501471729360335,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a5730fb-5c3b-4b76-b804-8ce49f5e4b71 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.730251204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ed305e02-fdec-474a-a28f-ed1b5a1b3fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.730302828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ed305e02-fdec-474a-a28f-ed1b5a1b3fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.730623019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ed305e02-fdec-474a-a28f-ed1b5a1b3fef name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.773513529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bace59b-9ba5-4a3d-9a40-ca713b20077e name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.773589165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bace59b-9ba5-4a3d-9a40-ca713b20077e name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.774556571Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7f8f00b-5ee6-470b-a1f3-fe95d4a8dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.774986906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501471774963346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7f8f00b-5ee6-470b-a1f3-fe95d4a8dfa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.775630252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=618f81fb-5df4-4dcf-95fc-fd4845d4d560 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.775680408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=618f81fb-5df4-4dcf-95fc-fd4845d4d560 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:51 ha-199780 crio[667]: time="2024-10-09 19:17:51.775897412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=618f81fb-5df4-4dcf-95fc-fd4845d4d560 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ea2f43f1a79f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4ee23da4cac60       busybox-7dff88458-9j59h
	22a50af75d092       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   085e585069bd9       coredns-7c65d6cfc9-r8lg7
	35a77197ba833       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   31a68dbf07563       coredns-7c65d6cfc9-v5k75
	ec6c52f12ef1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   fe10d9898f15c       storage-provisioner
	aa6f941b511ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   574f1065ffc92       kindnet-2gjpk
	e72e7a03ebf12       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   893da030028ba       kube-proxy-n8ffq
	5e66ef287f9b9       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   f43a5a99f755d       kube-vip-ha-199780
	297d9ba8730bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c04b2a2ff60e       kube-apiserver-ha-199780
	88b0c31651177       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   7304e21bfd538       kube-controller-manager-ha-199780
	ce5525ec371c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a31ef18f5a475       etcd-ha-199780
	02b6fe12544b4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4e472f9c0008c       kube-scheduler-ha-199780
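	
	For reference, the repeated "/runtime.v1.RuntimeService/ListContainers" entries in the crio debug log above come from clients polling the CRI socket. A minimal Go sketch of issuing that same RPC is shown below; it is not part of the minikube test suite, and the socket path is only an assumption taken from the cri-socket annotation reported in the node descriptions further down.
	
	// sketch: list containers over the CRI socket (assumed path: /var/run/crio/crio.sock)
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// CRI-O serves the CRI over a local unix socket; no transport credentials are used.
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		// An empty filter returns the full container list, matching the
		// "No filters were applied, returning full container list" log line above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}
	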
	
	
	==> coredns [22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431] <==
	[INFO] 10.244.2.2:60800 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001355455s
	[INFO] 10.244.2.2:51592 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001524757s
	[INFO] 10.244.0.4:56643 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000117626s
	[INFO] 10.244.0.4:59083 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001918015s
	[INFO] 10.244.1.2:50050 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020734s
	[INFO] 10.244.1.2:42588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154546s
	[INFO] 10.244.2.2:53843 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710102s
	[INFO] 10.244.2.2:41845 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146416s
	[INFO] 10.244.2.2:36609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000234089s
	[INFO] 10.244.0.4:46267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770158s
	[INFO] 10.244.0.4:50439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087554s
	[INFO] 10.244.0.4:34970 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127814s
	[INFO] 10.244.0.4:56896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001173975s
	[INFO] 10.244.0.4:49966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151676s
	[INFO] 10.244.1.2:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014083s
	[INFO] 10.244.1.2:44506 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088434s
	[INFO] 10.244.1.2:49086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070298s
	[INFO] 10.244.2.2:50808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197102s
	[INFO] 10.244.0.4:46671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019106s
	[INFO] 10.244.0.4:55369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070793s
	[INFO] 10.244.1.2:55579 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00053279s
	[INFO] 10.244.1.2:48281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017096s
	[INFO] 10.244.2.2:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179419s
	[INFO] 10.244.2.2:37087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001697s
	[INFO] 10.244.0.4:45764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105979s
	
	
	==> coredns [35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72] <==
	[INFO] 10.244.1.2:49567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017247s
	[INFO] 10.244.1.2:46716 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012636722s
	[INFO] 10.244.1.2:55598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179363s
	[INFO] 10.244.1.2:47319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137976s
	[INFO] 10.244.2.2:41489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.2.2:55951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222614s
	[INFO] 10.244.2.2:48627 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015294s
	[INFO] 10.244.2.2:39644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012309s
	[INFO] 10.244.2.2:40477 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089525s
	[INFO] 10.244.0.4:43949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.4:36372 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136676s
	[INFO] 10.244.0.4:46637 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067852s
	[INFO] 10.244.1.2:51170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178464s
	[INFO] 10.244.2.2:34724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178092s
	[INFO] 10.244.2.2:51704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113596s
	[INFO] 10.244.2.2:58856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114468s
	[INFO] 10.244.0.4:46411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103548s
	[INFO] 10.244.0.4:56515 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097616s
	[INFO] 10.244.1.2:46439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144476s
	[INFO] 10.244.1.2:55946 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169556s
	[INFO] 10.244.2.2:59005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136307s
	[INFO] 10.244.2.2:36778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074325s
	[INFO] 10.244.0.4:35520 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216466s
	[INFO] 10.244.0.4:37146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092067s
	[INFO] 10.244.0.4:38648 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006473s
	
	
	==> describe nodes <==
	Name:               ha-199780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-199780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8b350a04d4e4876ae4d16443fff45f4
	  System UUID:                f8b350a0-4d4e-4876-ae4d-16443fff45f4
	  Boot ID:                    933ad8fe-c793-4abe-b675-8fc9d8bb0df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9j59h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 coredns-7c65d6cfc9-r8lg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 coredns-7c65d6cfc9-v5k75             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m25s
	  kube-system                 etcd-ha-199780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m29s
	  kube-system                 kindnet-2gjpk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-apiserver-ha-199780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-controller-manager-ha-199780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-n8ffq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-scheduler-ha-199780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-vip-ha-199780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m23s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m29s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m29s  kubelet          Node ha-199780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m29s  kubelet          Node ha-199780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m29s  kubelet          Node ha-199780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  NodeReady                6m7s   kubelet          Node ha-199780 status is now: NodeReady
	  Normal  RegisteredNode           5m27s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  RegisteredNode           4m10s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	
	
	Name:               ha-199780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:12:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:15:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-199780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d9c79bf2f124101a095ed4ba0ce88eb
	  System UUID:                8d9c79bf-2f12-4101-a095-ed4ba0ce88eb
	  Boot ID:                    5dd46771-2617-4b89-b6af-8b5fb9f8968b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6v84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-199780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m33s
	  kube-system                 kindnet-pwr8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m35s
	  kube-system                 kube-apiserver-ha-199780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-controller-manager-ha-199780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-proxy-zfsq8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-scheduler-ha-199780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-199780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node ha-199780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m35s (x7 over 5m35s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-199780-m02 status is now: NodeNotReady
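	
	The m02 description above shows the node-controller marking the node NodeNotReady and all conditions going Unknown once the kubelet stopped posting status (consistent with the StopSecondaryNode step). A minimal client-go sketch for checking that Ready condition programmatically follows; it is not part of the test code, and the kubeconfig path and node name are assumptions for illustration.
	
	// sketch: read the Ready condition of ha-199780-m02 (assumes a local kubeconfig for this cluster)
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), "ha-199780-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// When the kubelet stops reporting, Status is "Unknown" rather than
				// "False", as in the conditions table above.
				fmt.Printf("Ready=%s reason=%s\n", cond.Status, cond.Reason)
			}
		}
	}
	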
	
	
	Name:               ha-199780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-199780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebc1909fc264048999cb603a9af6ce3
	  System UUID:                eebc1909-fc26-4048-999c-b603a9af6ce3
	  Boot ID:                    b15e1b77-82c5-4af5-a3d4-20b2860c5033
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8946j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 etcd-ha-199780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kindnet-b8ff2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m19s
	  kube-system                 kube-apiserver-ha-199780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-ha-199780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-cltcd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-ha-199780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-vip-ha-199780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node ha-199780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	
	
	Name:               ha-199780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_14_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    ha-199780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e482090944bd998625225909c9e80
	  System UUID:                781e4820-9094-4bd9-9862-5225909c9e80
	  Boot ID:                    12a0f26b-3a10-4a3c-a52b-9cbc57a77f21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24ftv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m13s
	  kube-system                 kube-proxy-m4z2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m13s (x2 over 3m14s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m13s (x2 over 3m14s)  kubelet          Node ha-199780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m13s (x2 over 3m14s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m12s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-199780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040118] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479681] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 9 19:11] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.067225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062889] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.160511] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.147234] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.288221] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.950259] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.382176] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.347615] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.082493] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.436773] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.719462] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 9 19:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef] <==
	{"level":"warn","ts":"2024-10-09T19:17:52.039547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.046452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.050501Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.060388Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.063079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.069599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.075870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.079890Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.082833Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.090184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.096203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.102531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.106132Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.109467Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.112575Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.115494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.115599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.124490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.147713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.164953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.175876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.181656Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:52.189962Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.83:2380/version","remote-member-id":"f466fee41a82c4a2","error":"Get \"https://192.168.39.83:2380/version\": dial tcp 192.168.39.83:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-09T19:17:52.190017Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f466fee41a82c4a2","error":"Get \"https://192.168.39.83:2380/version\": dial tcp 192.168.39.83:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-10-09T19:17:52.224351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:17:52 up 7 min,  0 users,  load average: 0.42, 0.37, 0.19
	Linux ha-199780 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff] <==
	I1009 19:17:15.107515       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:25.107513       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:25.107568       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:25.107889       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:25.107926       1 main.go:300] handling current node
	I1009 19:17:25.107945       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:25.107952       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:25.108091       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:25.108116       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:35.098534       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:35.098583       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:35.098861       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:35.098893       1 main.go:300] handling current node
	I1009 19:17:35.098905       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:35.098910       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:35.099056       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:35.099076       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106531       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:45.106579       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:45.106833       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:45.106857       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106999       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:45.107020       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:45.107136       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:45.107162       1 main.go:300] handling current node
	
	
	==> kube-apiserver [297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d] <==
	I1009 19:11:21.668889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:11:21.770460       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:11:21.781866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.114]
	I1009 19:11:21.782961       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 19:11:21.787948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:11:22.068030       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 19:11:22.927751       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 19:11:22.944470       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:11:23.089040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 19:11:27.267149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 19:11:27.777277       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1009 19:14:07.172312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48556: use of closed network connection
	E1009 19:14:07.353387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48566: use of closed network connection
	E1009 19:14:07.545234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48574: use of closed network connection
	E1009 19:14:07.734543       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48582: use of closed network connection
	E1009 19:14:07.929888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48590: use of closed network connection
	E1009 19:14:08.100628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48610: use of closed network connection
	E1009 19:14:08.280738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48618: use of closed network connection
	E1009 19:14:08.453709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48636: use of closed network connection
	E1009 19:14:08.625372       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48648: use of closed network connection
	E1009 19:14:08.913070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48688: use of closed network connection
	E1009 19:14:09.077842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48702: use of closed network connection
	E1009 19:14:09.252280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48730: use of closed network connection
	E1009 19:14:09.427983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1009 19:14:09.597172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48774: use of closed network connection
	
	
	==> kube-controller-manager [88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf] <==
	I1009 19:14:39.219907       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-199780-m04" podCIDRs=["10.244.3.0/24"]
	I1009 19:14:39.220731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.221061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.241490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.355995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.770947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:40.508613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-199780-m04"
	I1009 19:14:42.092487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.021323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.490581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:49.589213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:14:59.228331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:00.446970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:10.142919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:52.044073       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:15:52.044690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.073336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.197476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.479755ms"
	I1009 19:15:52.197580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.944µs"
	I1009 19:15:53.092490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:57.298894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	
	
	==> kube-proxy [e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 19:11:28.707293       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 19:11:28.725677       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E1009 19:11:28.725782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:11:28.757070       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 19:11:28.757115       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:11:28.757143       1 server_linux.go:169] "Using iptables Proxier"
	I1009 19:11:28.759907       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:11:28.760502       1 server.go:483] "Version info" version="v1.31.1"
	I1009 19:11:28.760531       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:11:28.763071       1 config.go:199] "Starting service config controller"
	I1009 19:11:28.763270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 19:11:28.763554       1 config.go:105] "Starting endpoint slice config controller"
	I1009 19:11:28.763583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 19:11:28.764395       1 config.go:328] "Starting node config controller"
	I1009 19:11:28.764485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 19:11:28.864003       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 19:11:28.864032       1 shared_informer.go:320] Caches are synced for service config
	I1009 19:11:28.864635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f] <==
	W1009 19:11:21.020523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.020653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.034179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.034272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.151254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:11:21.151392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.213273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:11:21.213327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.215782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:11:21.217186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.224009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:11:21.224287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.233925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:11:21.234510       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 19:11:21.254121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.254998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 19:11:24.360718       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 19:14:39.271772       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274796       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d0c6f382-7a34-4281-922e-ded9d878bec1(kube-system/kube-proxy-v6wc7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v6wc7"
	E1009 19:14:39.274892       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" pod="kube-system/kube-proxy-v6wc7"
	I1009 19:14:39.274974       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274639       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	E1009 19:14:39.277781       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67dc91f7-39c8-4a82-843c-629f28c633ce(kube-system/kindnet-24ftv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24ftv"
	E1009 19:14:39.277909       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" pod="kube-system/kindnet-24ftv"
	I1009 19:14:39.278018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	
	
	==> kubelet <==
	Oct 09 19:16:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:16:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169875    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169902    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171614    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171869    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174108    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174391    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177556    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177590    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179697    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179743    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181290    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181685    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.046503    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183478    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183519    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.185325    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.186043    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188281    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188327    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.023827357s)
ha_test.go:309: expected profile "ha-199780" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-199780\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-199780\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-199780\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.114\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.83\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.84\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.124\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (1.366054417s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m03_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-199780 node start m02 -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
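	The audit rows above trace the CopyFile round-trip the HA test drives against each node: copy a file in with minikube cp, then read it back with minikube ssh -n <node>. Run by hand against this profile, the pair would look roughly like the sketch below (profile and node names taken from the table; the exact flag order used by the test harness may differ):

	  out/minikube-linux-amd64 -p ha-199780 cp testdata/cp-test.txt ha-199780-m04:/home/docker/cp-test.txt
	  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"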
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:10:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:10:42.430511   28654 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:42.430648   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430657   28654 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:42.430662   28654 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:42.430823   28654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:42.431377   28654 out.go:352] Setting JSON to false
	I1009 19:10:42.432258   28654 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3183,"bootTime":1728497859,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:42.432357   28654 start.go:139] virtualization: kvm guest
	I1009 19:10:42.434444   28654 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:42.435720   28654 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:42.435744   28654 notify.go:220] Checking for updates...
	I1009 19:10:42.438470   28654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:42.439771   28654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:42.441201   28654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.442550   28654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:42.443839   28654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:42.445321   28654 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:42.478513   28654 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 19:10:42.479828   28654 start.go:297] selected driver: kvm2
	I1009 19:10:42.479841   28654 start.go:901] validating driver "kvm2" against <nil>
	I1009 19:10:42.479851   28654 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:42.480537   28654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.480609   28654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:10:42.494762   28654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:10:42.494798   28654 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 19:10:42.495015   28654 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:10:42.495042   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:10:42.495103   28654 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1009 19:10:42.495115   28654 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 19:10:42.495160   28654 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:42.495268   28654 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:10:42.497127   28654 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:10:42.498350   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:10:42.498375   28654 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:10:42.498383   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:10:42.498461   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:10:42.498474   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:10:42.498736   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:10:42.498755   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json: {Name:mkaa9f981fdc58b4cf67de89e14727a24139b9f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:10:42.498888   28654 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:10:42.498923   28654 start.go:364] duration metric: took 18.652µs to acquireMachinesLock for "ha-199780"
	I1009 19:10:42.498944   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:10:42.499008   28654 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 19:10:42.500613   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:10:42.500730   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:42.500770   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:42.514603   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I1009 19:10:42.515116   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:42.515617   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:10:42.515660   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:42.515950   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:42.516152   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:10:42.516283   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:10:42.516418   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:10:42.516447   28654 client.go:168] LocalClient.Create starting
	I1009 19:10:42.516482   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:10:42.516515   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516531   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516577   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:10:42.516599   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:10:42.516612   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:10:42.516640   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:10:42.516651   28654 main.go:141] libmachine: (ha-199780) Calling .PreCreateCheck
	I1009 19:10:42.516980   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:10:42.517335   28654 main.go:141] libmachine: Creating machine...
	I1009 19:10:42.517347   28654 main.go:141] libmachine: (ha-199780) Calling .Create
	I1009 19:10:42.517467   28654 main.go:141] libmachine: (ha-199780) Creating KVM machine...
	I1009 19:10:42.518611   28654 main.go:141] libmachine: (ha-199780) DBG | found existing default KVM network
	I1009 19:10:42.519307   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.519165   28677 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b90}
	I1009 19:10:42.519338   28654 main.go:141] libmachine: (ha-199780) DBG | created network xml: 
	I1009 19:10:42.519353   28654 main.go:141] libmachine: (ha-199780) DBG | <network>
	I1009 19:10:42.519365   28654 main.go:141] libmachine: (ha-199780) DBG |   <name>mk-ha-199780</name>
	I1009 19:10:42.519373   28654 main.go:141] libmachine: (ha-199780) DBG |   <dns enable='no'/>
	I1009 19:10:42.519380   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519389   28654 main.go:141] libmachine: (ha-199780) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 19:10:42.519398   28654 main.go:141] libmachine: (ha-199780) DBG |     <dhcp>
	I1009 19:10:42.519408   28654 main.go:141] libmachine: (ha-199780) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 19:10:42.519416   28654 main.go:141] libmachine: (ha-199780) DBG |     </dhcp>
	I1009 19:10:42.519425   28654 main.go:141] libmachine: (ha-199780) DBG |   </ip>
	I1009 19:10:42.519432   28654 main.go:141] libmachine: (ha-199780) DBG |   
	I1009 19:10:42.519439   28654 main.go:141] libmachine: (ha-199780) DBG | </network>
	I1009 19:10:42.519448   28654 main.go:141] libmachine: (ha-199780) DBG | 
	I1009 19:10:42.523998   28654 main.go:141] libmachine: (ha-199780) DBG | trying to create private KVM network mk-ha-199780 192.168.39.0/24...
	I1009 19:10:42.584957   28654 main.go:141] libmachine: (ha-199780) DBG | private KVM network mk-ha-199780 192.168.39.0/24 created
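	At this point libvirt has the private network mk-ha-199780 active, defined from the XML dumped above. A hand-run inspection of the same state on the Jenkins host would use the standard libvirt CLI (a sketch, assuming virsh is installed and the session can reach qemu:///system):

	  virsh --connect qemu:///system net-list --all
	  virsh --connect qemu:///system net-dumpxml mk-ha-199780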
	I1009 19:10:42.584984   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.584941   28677 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:42.584995   28654 main.go:141] libmachine: (ha-199780) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:42.585010   28654 main.go:141] libmachine: (ha-199780) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:10:42.585155   28654 main.go:141] libmachine: (ha-199780) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:10:42.845983   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:42.845854   28677 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa...
	I1009 19:10:43.100187   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100062   28677 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk...
	I1009 19:10:43.100216   28654 main.go:141] libmachine: (ha-199780) DBG | Writing magic tar header
	I1009 19:10:43.100229   28654 main.go:141] libmachine: (ha-199780) DBG | Writing SSH key tar header
	I1009 19:10:43.100242   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:43.100204   28677 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 ...
	I1009 19:10:43.100332   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780
	I1009 19:10:43.100355   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780 (perms=drwx------)
	I1009 19:10:43.100365   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:10:43.100376   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:10:43.100386   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:43.100399   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:10:43.100406   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:10:43.100424   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:10:43.100435   28654 main.go:141] libmachine: (ha-199780) DBG | Checking permissions on dir: /home
	I1009 19:10:43.100443   28654 main.go:141] libmachine: (ha-199780) DBG | Skipping /home - not owner
	I1009 19:10:43.100455   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:10:43.100467   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:10:43.100476   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:10:43.100483   28654 main.go:141] libmachine: (ha-199780) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:10:43.100487   28654 main.go:141] libmachine: (ha-199780) Creating domain...
	I1009 19:10:43.101601   28654 main.go:141] libmachine: (ha-199780) define libvirt domain using xml: 
	I1009 19:10:43.101609   28654 main.go:141] libmachine: (ha-199780) <domain type='kvm'>
	I1009 19:10:43.101614   28654 main.go:141] libmachine: (ha-199780)   <name>ha-199780</name>
	I1009 19:10:43.101624   28654 main.go:141] libmachine: (ha-199780)   <memory unit='MiB'>2200</memory>
	I1009 19:10:43.101632   28654 main.go:141] libmachine: (ha-199780)   <vcpu>2</vcpu>
	I1009 19:10:43.101638   28654 main.go:141] libmachine: (ha-199780)   <features>
	I1009 19:10:43.101646   28654 main.go:141] libmachine: (ha-199780)     <acpi/>
	I1009 19:10:43.101656   28654 main.go:141] libmachine: (ha-199780)     <apic/>
	I1009 19:10:43.101664   28654 main.go:141] libmachine: (ha-199780)     <pae/>
	I1009 19:10:43.101673   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.101686   28654 main.go:141] libmachine: (ha-199780)   </features>
	I1009 19:10:43.101695   28654 main.go:141] libmachine: (ha-199780)   <cpu mode='host-passthrough'>
	I1009 19:10:43.101702   28654 main.go:141] libmachine: (ha-199780)   
	I1009 19:10:43.101711   28654 main.go:141] libmachine: (ha-199780)   </cpu>
	I1009 19:10:43.101752   28654 main.go:141] libmachine: (ha-199780)   <os>
	I1009 19:10:43.101769   28654 main.go:141] libmachine: (ha-199780)     <type>hvm</type>
	I1009 19:10:43.101776   28654 main.go:141] libmachine: (ha-199780)     <boot dev='cdrom'/>
	I1009 19:10:43.101783   28654 main.go:141] libmachine: (ha-199780)     <boot dev='hd'/>
	I1009 19:10:43.101819   28654 main.go:141] libmachine: (ha-199780)     <bootmenu enable='no'/>
	I1009 19:10:43.101840   28654 main.go:141] libmachine: (ha-199780)   </os>
	I1009 19:10:43.101848   28654 main.go:141] libmachine: (ha-199780)   <devices>
	I1009 19:10:43.101855   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='cdrom'>
	I1009 19:10:43.101864   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/boot2docker.iso'/>
	I1009 19:10:43.101869   28654 main.go:141] libmachine: (ha-199780)       <target dev='hdc' bus='scsi'/>
	I1009 19:10:43.101877   28654 main.go:141] libmachine: (ha-199780)       <readonly/>
	I1009 19:10:43.101881   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101887   28654 main.go:141] libmachine: (ha-199780)     <disk type='file' device='disk'>
	I1009 19:10:43.101894   28654 main.go:141] libmachine: (ha-199780)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:10:43.101901   28654 main.go:141] libmachine: (ha-199780)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/ha-199780.rawdisk'/>
	I1009 19:10:43.101908   28654 main.go:141] libmachine: (ha-199780)       <target dev='hda' bus='virtio'/>
	I1009 19:10:43.101913   28654 main.go:141] libmachine: (ha-199780)     </disk>
	I1009 19:10:43.101919   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101933   28654 main.go:141] libmachine: (ha-199780)       <source network='mk-ha-199780'/>
	I1009 19:10:43.101946   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101959   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.101969   28654 main.go:141] libmachine: (ha-199780)     <interface type='network'>
	I1009 19:10:43.101978   28654 main.go:141] libmachine: (ha-199780)       <source network='default'/>
	I1009 19:10:43.101987   28654 main.go:141] libmachine: (ha-199780)       <model type='virtio'/>
	I1009 19:10:43.101995   28654 main.go:141] libmachine: (ha-199780)     </interface>
	I1009 19:10:43.102004   28654 main.go:141] libmachine: (ha-199780)     <serial type='pty'>
	I1009 19:10:43.102012   28654 main.go:141] libmachine: (ha-199780)       <target port='0'/>
	I1009 19:10:43.102025   28654 main.go:141] libmachine: (ha-199780)     </serial>
	I1009 19:10:43.102042   28654 main.go:141] libmachine: (ha-199780)     <console type='pty'>
	I1009 19:10:43.102058   28654 main.go:141] libmachine: (ha-199780)       <target type='serial' port='0'/>
	I1009 19:10:43.102072   28654 main.go:141] libmachine: (ha-199780)     </console>
	I1009 19:10:43.102081   28654 main.go:141] libmachine: (ha-199780)     <rng model='virtio'>
	I1009 19:10:43.102095   28654 main.go:141] libmachine: (ha-199780)       <backend model='random'>/dev/random</backend>
	I1009 19:10:43.102102   28654 main.go:141] libmachine: (ha-199780)     </rng>
	I1009 19:10:43.102106   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102114   28654 main.go:141] libmachine: (ha-199780)     
	I1009 19:10:43.102124   28654 main.go:141] libmachine: (ha-199780)   </devices>
	I1009 19:10:43.102131   28654 main.go:141] libmachine: (ha-199780) </domain>
	I1009 19:10:43.102144   28654 main.go:141] libmachine: (ha-199780) 
	I1009 19:10:43.106174   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:62:13:83 in network default
	I1009 19:10:43.106715   28654 main.go:141] libmachine: (ha-199780) Ensuring networks are active...
	I1009 19:10:43.106743   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:43.107417   28654 main.go:141] libmachine: (ha-199780) Ensuring network default is active
	I1009 19:10:43.107748   28654 main.go:141] libmachine: (ha-199780) Ensuring network mk-ha-199780 is active
	I1009 19:10:43.108262   28654 main.go:141] libmachine: (ha-199780) Getting domain xml...
	I1009 19:10:43.109003   28654 main.go:141] libmachine: (ha-199780) Creating domain...
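	With the domain defined and started, the provisioner below polls until the guest obtains a DHCP lease on mk-ha-199780. A hand-run equivalent of that check, again assuming access to the system libvirt socket, would be:

	  virsh --connect qemu:///system dumpxml ha-199780
	  virsh --connect qemu:///system net-dhcp-leases mk-ha-199780

	The lease only appears once the guest has booted far enough to send a DHCP request, which is why the retries below back off with increasing delays.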
	I1009 19:10:44.275323   28654 main.go:141] libmachine: (ha-199780) Waiting to get IP...
	I1009 19:10:44.276021   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.276397   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.276440   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.276393   28677 retry.go:31] will retry after 234.976528ms: waiting for machine to come up
	I1009 19:10:44.512805   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.513239   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.513266   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.513207   28677 retry.go:31] will retry after 293.441421ms: waiting for machine to come up
	I1009 19:10:44.808637   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:44.809099   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:44.809119   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:44.809062   28677 retry.go:31] will retry after 303.641198ms: waiting for machine to come up
	I1009 19:10:45.114382   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.114813   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.114842   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.114772   28677 retry.go:31] will retry after 536.014176ms: waiting for machine to come up
	I1009 19:10:45.652428   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:45.652792   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:45.652818   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:45.652745   28677 retry.go:31] will retry after 705.110787ms: waiting for machine to come up
	I1009 19:10:46.359497   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:46.360044   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:46.360101   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:46.360017   28677 retry.go:31] will retry after 647.020654ms: waiting for machine to come up
	I1009 19:10:47.008863   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:47.009323   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:47.009364   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:47.009282   28677 retry.go:31] will retry after 1.0294982s: waiting for machine to come up
	I1009 19:10:48.039832   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:48.040304   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:48.040326   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:48.040267   28677 retry.go:31] will retry after 1.106767931s: waiting for machine to come up
	I1009 19:10:49.148646   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:49.149054   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:49.149076   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:49.149026   28677 retry.go:31] will retry after 1.376949133s: waiting for machine to come up
	I1009 19:10:50.527437   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:50.527855   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:50.527877   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:50.527806   28677 retry.go:31] will retry after 1.480550438s: waiting for machine to come up
	I1009 19:10:52.009673   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:52.010195   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:52.010224   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:52.010161   28677 retry.go:31] will retry after 2.407652517s: waiting for machine to come up
	I1009 19:10:54.420236   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:54.420627   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:54.420661   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:54.420596   28677 retry.go:31] will retry after 3.410708317s: waiting for machine to come up
	I1009 19:10:57.833396   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:10:57.833828   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:10:57.833855   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:10:57.833781   28677 retry.go:31] will retry after 3.08007179s: waiting for machine to come up
	I1009 19:11:00.918052   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:00.918375   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find current IP address of domain ha-199780 in network mk-ha-199780
	I1009 19:11:00.918394   28654 main.go:141] libmachine: (ha-199780) DBG | I1009 19:11:00.918349   28677 retry.go:31] will retry after 3.66383863s: waiting for machine to come up
	I1009 19:11:04.584755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585113   28654 main.go:141] libmachine: (ha-199780) Found IP for machine: 192.168.39.114
	I1009 19:11:04.585143   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has current primary IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.585150   28654 main.go:141] libmachine: (ha-199780) Reserving static IP address...
	I1009 19:11:04.585468   28654 main.go:141] libmachine: (ha-199780) DBG | unable to find host DHCP lease matching {name: "ha-199780", mac: "52:54:00:5a:16:82", ip: "192.168.39.114"} in network mk-ha-199780
	I1009 19:11:04.653177   28654 main.go:141] libmachine: (ha-199780) DBG | Getting to WaitForSSH function...
	I1009 19:11:04.653210   28654 main.go:141] libmachine: (ha-199780) Reserved static IP address: 192.168.39.114
	I1009 19:11:04.653224   28654 main.go:141] libmachine: (ha-199780) Waiting for SSH to be available...
	I1009 19:11:04.655641   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.655950   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.655974   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.656128   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH client type: external
	I1009 19:11:04.656155   28654 main.go:141] libmachine: (ha-199780) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa (-rw-------)
	I1009 19:11:04.656182   28654 main.go:141] libmachine: (ha-199780) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.114 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:04.656192   28654 main.go:141] libmachine: (ha-199780) DBG | About to run SSH command:
	I1009 19:11:04.656207   28654 main.go:141] libmachine: (ha-199780) DBG | exit 0
	I1009 19:11:04.778875   28654 main.go:141] libmachine: (ha-199780) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:04.779170   28654 main.go:141] libmachine: (ha-199780) KVM machine creation complete!
	I1009 19:11:04.779478   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:04.780010   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780176   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:04.780315   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:04.780331   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:04.781523   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:04.781541   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:04.781546   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:04.781551   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.783979   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784330   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.784354   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.784520   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.784676   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784815   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.784920   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.785023   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.785198   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.785208   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:04.886621   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:04.886642   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:04.886652   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.889117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889470   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.889489   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.889658   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.889825   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.889979   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.890105   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.890280   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.890429   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.890439   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:04.991626   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:04.991752   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:04.991763   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:04.991772   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.991975   28654 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:11:04.991994   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:04.992147   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:04.994446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994806   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:04.994831   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:04.994954   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:04.995140   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995287   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:04.995424   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:04.995557   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:04.995745   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:04.995756   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:11:05.113349   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:11:05.113396   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.116625   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117021   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.117049   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.117198   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.117349   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117468   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.117570   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.117692   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.117857   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.117885   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:05.228123   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
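	The two SSH commands above set the guest hostname and patch /etc/hosts. A quick manual check of the resulting state, assuming minikube ssh is pointed at this profile (it accepts a command to run on the node):

	  out/minikube-linux-amd64 -p ha-199780 ssh "hostname && grep ha-199780 /etc/hosts"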
	I1009 19:11:05.228148   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:05.228172   28654 buildroot.go:174] setting up certificates
	I1009 19:11:05.228182   28654 provision.go:84] configureAuth start
	I1009 19:11:05.228189   28654 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:11:05.228442   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.230797   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231092   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.231117   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.231241   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.233255   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233547   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.233569   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.233652   28654 provision.go:143] copyHostCerts
	I1009 19:11:05.233688   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233736   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:05.233748   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:05.233826   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:05.233942   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.233970   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:05.233976   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:05.234005   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:05.234063   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234084   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:05.234090   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:05.234111   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:05.234159   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	I1009 19:11:05.299525   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:05.299577   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:05.299597   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.301859   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302122   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.302159   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.302298   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.302456   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.302593   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.302710   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.385328   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:05.385392   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:05.408377   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:05.408446   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:11:05.431231   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:05.431308   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:05.454941   28654 provision.go:87] duration metric: took 226.750506ms to configureAuth
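	configureAuth generated a server certificate with the SANs listed above (127.0.0.1, 192.168.39.114, ha-199780, localhost, minikube) and copied it to /etc/docker on the guest. One way to confirm those SANs by hand, assuming openssl is present in the guest image:

	  out/minikube-linux-amd64 -p ha-199780 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"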
	I1009 19:11:05.454965   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:05.455145   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:05.455206   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.457741   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458006   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.458042   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.458216   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.458397   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458525   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.458644   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.458788   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.458960   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.458976   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:05.676474   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
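	The drop-in above writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts CRI-O. To confirm by hand that the option landed and the service came back up (illustrative commands, not part of the test run):

	  out/minikube-linux-amd64 -p ha-199780 ssh "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"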
	
	I1009 19:11:05.676512   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:05.676522   28654 main.go:141] libmachine: (ha-199780) Calling .GetURL
	I1009 19:11:05.677728   28654 main.go:141] libmachine: (ha-199780) DBG | Using libvirt version 6000000
	I1009 19:11:05.679755   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680041   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.680069   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.680196   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:05.680210   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:05.680217   28654 client.go:171] duration metric: took 23.163762708s to LocalClient.Create
	I1009 19:11:05.680235   28654 start.go:167] duration metric: took 23.163818343s to libmachine.API.Create "ha-199780"
	I1009 19:11:05.680244   28654 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:11:05.680255   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:05.680269   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.680459   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:05.680481   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.682388   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682658   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.682683   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.682747   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.682909   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.683039   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.683197   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.767177   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:05.771701   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:05.771721   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:05.771790   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:05.771869   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:05.771881   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:05.771984   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:05.783287   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:05.808917   28654 start.go:296] duration metric: took 128.662808ms for postStartSetup
	I1009 19:11:05.808956   28654 main.go:141] libmachine: (ha-199780) Calling .GetConfigRaw
	I1009 19:11:05.809504   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.812016   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812350   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.812373   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.812566   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:05.812738   28654 start.go:128] duration metric: took 23.313722048s to createHost
	I1009 19:11:05.812762   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.814746   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.815078   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.815176   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.815323   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815479   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.815598   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.815737   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:05.815932   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:11:05.815953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:05.919951   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501065.894358321
	
	I1009 19:11:05.919974   28654 fix.go:216] guest clock: 1728501065.894358321
	I1009 19:11:05.919982   28654 fix.go:229] Guest: 2024-10-09 19:11:05.894358321 +0000 UTC Remote: 2024-10-09 19:11:05.812750418 +0000 UTC m=+23.417944098 (delta=81.607903ms)
	I1009 19:11:05.920005   28654 fix.go:200] guest clock delta is within tolerance: 81.607903ms
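Worked example of the guest-clock check above: the delta is just the absolute difference between the guest and host timestamps, compared against a tolerance. The 2-second tolerance below is an assumption for the sketch; the log only states that the measured 81.607903ms delta is within it.

package main

import (
	"fmt"
	"time"
)

// Mirrors the guest-clock comparison logged above (illustrative only).
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Date(2024, time.October, 9, 19, 11, 5, 894358321, time.UTC)
	host := time.Date(2024, time.October, 9, 19, 11, 5, 812750418, time.UTC)
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta = 81.607903ms
}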
	I1009 19:11:05.920012   28654 start.go:83] releasing machines lock for "ha-199780", held for 23.421078352s
	I1009 19:11:05.920035   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.920263   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:05.922615   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.922966   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.922995   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.923150   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923568   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923734   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:05.923824   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:05.923862   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.924006   28654 ssh_runner.go:195] Run: cat /version.json
	I1009 19:11:05.924044   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:05.926446   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926648   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926765   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.926802   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.926912   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927037   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:05.927038   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927086   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:05.927223   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927272   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:05.927339   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:05.927433   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:05.927750   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:05.927897   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:06.024499   28654 ssh_runner.go:195] Run: systemctl --version
	I1009 19:11:06.030414   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:06.185061   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:06.191423   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:06.191490   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:06.206786   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:06.206805   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:06.206857   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:06.222401   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:06.235373   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:06.235433   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:06.247949   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:06.260686   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:06.376406   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:06.514646   28654 docker.go:233] disabling docker service ...
	I1009 19:11:06.514703   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:06.529298   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:06.542407   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:06.674904   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:06.805457   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:06.819076   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:06.839480   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:06.839538   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.851838   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:06.851893   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.864160   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.876368   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.889066   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:06.901093   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.912169   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:06.929058   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
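The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, and replace any conmon_cgroup setting with "pod". A minimal Go sketch of the same rewrite, using a representative (assumed) drop-in as input:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Representative drop-in contents before the edits; the real file on the
	// guest may differ, this is only for illustration.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image (first sed above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it as "pod".
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}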
	I1009 19:11:06.939929   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:06.949542   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:06.949583   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:06.962939   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
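A small sketch of the netfilter steps above, assuming root on the guest: if the bridge-nf-call-iptables sysctl key is missing (the status-255 case logged above), load br_netfilter, then switch on IPv4 forwarding.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		// Key absent: the br_netfilter module is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	// Enable IPv4 forwarding for pod traffic.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
	log.Println("bridge netfilter available, ip_forward enabled")
}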
	I1009 19:11:06.972697   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:07.093662   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:07.192295   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:07.192352   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:07.197105   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:07.197162   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:07.200935   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:07.247609   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:07.247689   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.275380   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:07.304930   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:07.306083   28654 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:11:07.308768   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309094   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:07.309121   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:07.309303   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:07.313459   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
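The one-liner above updates /etc/hosts idempotently: it filters out any existing host.minikube.internal entry and appends the current mapping. A rough Go equivalent, assuming root; the address and hostname are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale entry for the hostname we are about to re-add.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, "192.168.39.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}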
	I1009 19:11:07.326691   28654 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:11:07.326798   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:07.326859   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:07.358942   28654 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 19:11:07.359000   28654 ssh_runner.go:195] Run: which lz4
	I1009 19:11:07.363007   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1009 19:11:07.363119   28654 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 19:11:07.367226   28654 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 19:11:07.367262   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 19:11:08.682998   28654 crio.go:462] duration metric: took 1.319910565s to copy over tarball
	I1009 19:11:08.683082   28654 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 19:11:10.661640   28654 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.978525541s)
	I1009 19:11:10.661674   28654 crio.go:469] duration metric: took 1.978647131s to extract the tarball
	I1009 19:11:10.661683   28654 ssh_runner.go:146] rm: /preloaded.tar.lz4
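A sketch of the preload handling above, under the assumption that the tarball has already been copied to the guest; in the real flow it is scp'd over first when the stat check fails. Root is required for the extraction into /var.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Fatalf("preload tarball not present yet: %v", err)
	}
	// Extract preserving extended attributes, as in the tar invocation above.
	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := extract.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	// The tarball is removed once the images are unpacked.
	if out, err := exec.Command("sudo", "rm", "-f", tarball).CombinedOutput(); err != nil {
		log.Fatalf("cleanup failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted into /var")
}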
	I1009 19:11:10.698452   28654 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:11:10.744870   28654 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:11:10.744890   28654 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:11:10.744897   28654 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:11:10.744976   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:10.745041   28654 ssh_runner.go:195] Run: crio config
	I1009 19:11:10.794773   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:10.794792   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:10.794807   28654 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:11:10.794828   28654 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:11:10.794978   28654 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:11:10.795005   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:10.795055   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:10.811512   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:10.811631   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
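One way to sanity-check the manifest above is to probe whether anything answers TLS on the virtual IP. A minimal sketch follows: the VIP and port come from the config above, and certificate verification is deliberately skipped because only reachability is being tested, not identity.

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	dialer := &net.Dialer{Timeout: 3 * time.Second}
	// Probe the kube-vip address on the API server port.
	conn, err := tls.DialWithDialer(dialer, "tcp", "192.168.39.254:8443",
		&tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("kube-vip is answering on 192.168.39.254:8443")
}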
	I1009 19:11:10.811693   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:10.821887   28654 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:11:10.821946   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:11:10.831583   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:11:10.848385   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:10.865617   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:11:10.882082   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1009 19:11:10.898198   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:10.902054   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:10.914494   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:11.043972   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:11.060509   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:11:11.060533   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:11.060553   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.060728   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:11.060785   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:11.060798   28654 certs.go:256] generating profile certs ...
	I1009 19:11:11.060867   28654 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:11.060891   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt with IP's: []
	I1009 19:11:11.257901   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt ...
	I1009 19:11:11.257931   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt: {Name:mke6971132fee40da37bc72041e92dde05b5c360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258111   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key ...
	I1009 19:11:11.258127   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key: {Name:mk2c48ceaf748f5efc5f062df1cf8bf8d38b626a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.258227   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621
	I1009 19:11:11.258246   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.254]
	I1009 19:11:11.502202   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 ...
	I1009 19:11:11.502241   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621: {Name:mk85bc5cf43d418e43d8be4b6611eb785caa9f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502445   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 ...
	I1009 19:11:11.502463   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621: {Name:mk1d94ea93b96fe750cd9f95170ab488ca016856 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.502573   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:11.502721   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.4b78b621 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
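The profile certs above are signed by minikubeCA with the IP SANs listed in the log. A stdlib-only sketch of producing such a serving certificate is below; it is an illustration, not minikube's actual helper, and the CA here is freshly generated rather than reused from disk.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch (error handling elided for brevity).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs seen in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.114"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}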
	I1009 19:11:11.502815   28654 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:11.502839   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt with IP's: []
	I1009 19:11:11.612443   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt ...
	I1009 19:11:11.612470   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt: {Name:mk212b018e6441944e189239707af3950678c689 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612646   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key ...
	I1009 19:11:11.612656   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key: {Name:mkb7f3d492b787f9b9b56d2b48939b9971f793ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:11.612724   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:11.612740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:11.612751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:11.612763   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:11.612774   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:11.612786   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:11.612798   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:11.612810   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:11.612864   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:11.612897   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:11.612903   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:11.612926   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:11.612951   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:11.612971   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:11.613006   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:11.613033   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.613046   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.613058   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:11.613596   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:11.638855   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:11.662787   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:11.686693   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:11.710429   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:11:11.734032   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:11.757651   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:11.781611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:11.805128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:11.831515   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:11.878516   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:11.903576   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:11:11.920589   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:11.926400   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:11.937651   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942167   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.942223   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:11.947902   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:11.959013   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:11.970169   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974738   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.974799   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:11.980430   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:11.991569   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:12.002421   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006666   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.006711   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:12.012305   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
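The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEM files; that is how tools resolve CAs from /etc/ssl/certs. A sketch of computing the hash and creating the link for one CA, assuming openssl on PATH and write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hash the log computes with "openssl x509 -hash -noout -in ...".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatalf("openssl x509 -hash: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link)
}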
	I1009 19:11:12.023435   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:12.027428   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:12.027474   28654 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:12.027535   28654 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:11:12.027572   28654 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:11:12.068414   28654 cri.go:89] found id: ""
	I1009 19:11:12.068473   28654 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 19:11:12.078653   28654 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 19:11:12.088659   28654 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 19:11:12.098391   28654 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 19:11:12.098408   28654 kubeadm.go:157] found existing configuration files:
	
	I1009 19:11:12.098445   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 19:11:12.107757   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 19:11:12.107807   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 19:11:12.117369   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 19:11:12.126789   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 19:11:12.126847   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 19:11:12.136637   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.146308   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 19:11:12.146364   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 19:11:12.156469   28654 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 19:11:12.165834   28654 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 19:11:12.165886   28654 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 19:11:12.175515   28654 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 19:11:12.280177   28654 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 19:11:12.280255   28654 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 19:11:12.386423   28654 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 19:11:12.386621   28654 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 19:11:12.386752   28654 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 19:11:12.404964   28654 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 19:11:12.482162   28654 out.go:235]   - Generating certificates and keys ...
	I1009 19:11:12.482262   28654 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 19:11:12.482346   28654 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 19:11:12.648552   28654 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 19:11:12.833455   28654 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 19:11:13.055850   28654 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 19:11:13.322371   28654 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 19:11:13.484433   28654 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 19:11:13.484631   28654 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:13.583799   28654 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 19:11:13.584031   28654 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-199780 localhost] and IPs [192.168.39.114 127.0.0.1 ::1]
	I1009 19:11:14.090538   28654 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 19:11:14.260812   28654 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 19:11:14.391262   28654 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 19:11:14.391369   28654 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 19:11:14.744340   28654 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 19:11:14.834478   28654 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 19:11:14.925339   28654 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 19:11:15.080024   28654 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 19:11:15.271189   28654 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 19:11:15.271810   28654 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 19:11:15.277194   28654 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 19:11:15.369554   28654 out.go:235]   - Booting up control plane ...
	I1009 19:11:15.369723   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 19:11:15.369842   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 19:11:15.369937   28654 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 19:11:15.370057   28654 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 19:11:15.370148   28654 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 19:11:15.370183   28654 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 19:11:15.445224   28654 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 19:11:15.445341   28654 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 19:11:16.448580   28654 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005128821s
	I1009 19:11:16.448662   28654 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 19:11:22.061566   28654 kubeadm.go:310] [api-check] The API server is healthy after 5.61687232s
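Both waits above follow the same pattern: poll a healthz endpoint until it returns 200 or a deadline expires. A minimal sketch against the kubelet endpoint from the log; the 4-minute deadline matches the message above, while the poll interval is an assumption.

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("kubelet did not become healthy within 4m0s")
}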
	I1009 19:11:22.078904   28654 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 19:11:22.108560   28654 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 19:11:22.646139   28654 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 19:11:22.646344   28654 kubeadm.go:310] [mark-control-plane] Marking the node ha-199780 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 19:11:22.657702   28654 kubeadm.go:310] [bootstrap-token] Using token: n3skeb.bws3ifw22cumajmm
	I1009 19:11:22.659119   28654 out.go:235]   - Configuring RBAC rules ...
	I1009 19:11:22.659267   28654 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 19:11:22.664574   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 19:11:22.677942   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 19:11:22.681624   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 19:11:22.685155   28654 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 19:11:22.689541   28654 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 19:11:22.705080   28654 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 19:11:22.957052   28654 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 19:11:23.469842   28654 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 19:11:23.470871   28654 kubeadm.go:310] 
	I1009 19:11:23.470925   28654 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 19:11:23.470933   28654 kubeadm.go:310] 
	I1009 19:11:23.471051   28654 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 19:11:23.471083   28654 kubeadm.go:310] 
	I1009 19:11:23.471125   28654 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 19:11:23.471223   28654 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 19:11:23.471271   28654 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 19:11:23.471296   28654 kubeadm.go:310] 
	I1009 19:11:23.471380   28654 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 19:11:23.471393   28654 kubeadm.go:310] 
	I1009 19:11:23.471455   28654 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 19:11:23.471464   28654 kubeadm.go:310] 
	I1009 19:11:23.471537   28654 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 19:11:23.471641   28654 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 19:11:23.471738   28654 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 19:11:23.471753   28654 kubeadm.go:310] 
	I1009 19:11:23.471870   28654 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 19:11:23.471974   28654 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 19:11:23.471984   28654 kubeadm.go:310] 
	I1009 19:11:23.472086   28654 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472234   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 19:11:23.472263   28654 kubeadm.go:310] 	--control-plane 
	I1009 19:11:23.472276   28654 kubeadm.go:310] 
	I1009 19:11:23.472382   28654 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 19:11:23.472392   28654 kubeadm.go:310] 
	I1009 19:11:23.472488   28654 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token n3skeb.bws3ifw22cumajmm \
	I1009 19:11:23.472616   28654 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 19:11:23.473525   28654 kubeadm.go:310] W1009 19:11:12.257145     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473837   28654 kubeadm.go:310] W1009 19:11:12.259703     838 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 19:11:23.473994   28654 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
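The join commands above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A joining node can recompute it from the CA file and check that it matches; the Go sketch below is only an illustration of that check and assumes kubeadm's default CA path.

    // Illustrative sketch: recompute kubeadm's --discovery-token-ca-cert-hash from the cluster CA.
    // Assumes the default kubeadm CA location /etc/kubernetes/pki/ca.crt.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the CA's DER-encoded SubjectPublicKeyInfo.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }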
	I1009 19:11:23.474033   28654 cni.go:84] Creating CNI manager for ""
	I1009 19:11:23.474046   28654 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1009 19:11:23.475963   28654 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 19:11:23.477363   28654 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 19:11:23.483529   28654 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 19:11:23.483553   28654 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 19:11:23.504303   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 19:11:23.863157   28654 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 19:11:23.863274   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:23.863284   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780 minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=true
	I1009 19:11:23.884152   28654 ops.go:34] apiserver oom_adj: -16
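The oom_adj check above reads /proc/<pid>/oom_adj for the kube-apiserver and records -16, meaning the kernel's OOM killer should prefer other processes over the API server. A minimal, purely illustrative Go read of that file (the PID below is a placeholder, not taken from the log):

    // Illustrative sketch: read a process's OOM adjustment from /proc.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func oomAdj(pid int) (string, error) {
    	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(b)), nil
    }

    func main() {
    	v, err := oomAdj(1234) // hypothetical kube-apiserver PID
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("oom_adj:", v)
    }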
	I1009 19:11:24.005714   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:24.506374   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.006091   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:25.506438   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.006141   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:26.506040   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.006400   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.505831   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 19:11:27.598386   28654 kubeadm.go:1113] duration metric: took 3.735177044s to wait for elevateKubeSystemPrivileges
	I1009 19:11:27.598425   28654 kubeadm.go:394] duration metric: took 15.5709527s to StartCluster
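The repeated `kubectl get sa default` runs above poll until the default ServiceAccount exists, which is what the elevateKubeSystemPrivileges duration measures. A hedged client-go equivalent of that poll, not minikube's actual implementation, could look like the sketch below (the kubeconfig path is taken from the log; interval and timeout are assumptions):

    // Illustrative sketch: wait until the "default" ServiceAccount exists.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms for up to 2 minutes until the ServiceAccount shows up.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			_, getErr := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			return getErr == nil, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("default ServiceAccount is ready")
    }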
	I1009 19:11:27.598446   28654 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.598527   28654 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.599166   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:27.599347   28654 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:27.599374   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:11:27.599357   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 19:11:27.599375   28654 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 19:11:27.599458   28654 addons.go:69] Setting storage-provisioner=true in profile "ha-199780"
	I1009 19:11:27.599469   28654 addons.go:69] Setting default-storageclass=true in profile "ha-199780"
	I1009 19:11:27.599477   28654 addons.go:234] Setting addon storage-provisioner=true in "ha-199780"
	I1009 19:11:27.599485   28654 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-199780"
	I1009 19:11:27.599503   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.599506   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:27.599886   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599927   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.599929   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.599968   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.614342   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
	I1009 19:11:27.614587   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I1009 19:11:27.614820   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615004   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.615360   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615381   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615494   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.615521   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.615770   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615869   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.615936   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.616437   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.616482   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.618027   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:11:27.618409   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 19:11:27.618933   28654 cert_rotation.go:140] Starting client certificate rotation controller
	I1009 19:11:27.619199   28654 addons.go:234] Setting addon default-storageclass=true in "ha-199780"
	I1009 19:11:27.619240   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:27.619589   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.619644   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.631880   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45567
	I1009 19:11:27.632439   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.632953   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.632968   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.633306   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.633511   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.633650   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I1009 19:11:27.634127   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.634757   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.634777   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.635148   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.635306   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.635705   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:27.635747   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:27.637278   28654 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 19:11:27.638972   28654 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.638992   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 19:11:27.639008   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.642192   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642642   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.642674   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.642796   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.642968   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.643174   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.643344   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.651531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42091
	I1009 19:11:27.652010   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:27.652633   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:27.652663   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:27.652996   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:27.653186   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:27.654702   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:27.654903   28654 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:27.654916   28654 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 19:11:27.654931   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:27.657462   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657809   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:27.657834   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:27.657997   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:27.658162   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:27.658275   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:27.658409   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:27.708249   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 19:11:27.824778   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 19:11:27.831460   28654 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 19:11:28.120955   28654 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1009 19:11:28.573087   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573114   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573134   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573150   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573505   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573520   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573544   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573545   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573557   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573510   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573628   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573649   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.573658   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573565   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.573900   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.573917   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573930   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.573931   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573940   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.573984   28654 main.go:141] libmachine: (ha-199780) DBG | Closing plugin on server side
	I1009 19:11:28.574002   28654 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 19:11:28.574017   28654 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 19:11:28.574123   28654 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1009 19:11:28.574129   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.574140   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.574147   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.586337   28654 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1009 19:11:28.587207   28654 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1009 19:11:28.587225   28654 round_trippers.go:469] Request Headers:
	I1009 19:11:28.587233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:11:28.587241   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:11:28.587251   28654 round_trippers.go:473]     Content-Type: application/json
	I1009 19:11:28.594277   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:11:28.594441   28654 main.go:141] libmachine: Making call to close driver server
	I1009 19:11:28.594457   28654 main.go:141] libmachine: (ha-199780) Calling .Close
	I1009 19:11:28.594703   28654 main.go:141] libmachine: Successfully made call to close driver server
	I1009 19:11:28.594721   28654 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 19:11:28.596581   28654 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 19:11:28.597699   28654 addons.go:510] duration metric: took 998.327173ms for enable addons: enabled=[storage-provisioner default-storageclass]
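The GET on /apis/storage.k8s.io/v1/storageclasses followed by the PUT on .../storageclasses/standard above is the default-storageclass addon making sure the standard class is marked as the cluster default, presumably via the storageclass.kubernetes.io/is-default-class annotation. A rough client-go sketch of that step (class name and kubeconfig path mirror the log but are assumptions; this is not minikube's actual code):

    // Illustrative sketch: mark a StorageClass as the cluster default.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	// This annotation is what kubectl and the PV controller use to pick the default class.
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("standard is now the default StorageClass")
    }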
	I1009 19:11:28.597726   28654 start.go:246] waiting for cluster config update ...
	I1009 19:11:28.597735   28654 start.go:255] writing updated cluster config ...
	I1009 19:11:28.599169   28654 out.go:201] 
	I1009 19:11:28.600456   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:28.600538   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.601965   28654 out.go:177] * Starting "ha-199780-m02" control-plane node in "ha-199780" cluster
	I1009 19:11:28.602974   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:11:28.602993   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:11:28.603093   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:11:28.603107   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:11:28.603182   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:28.603350   28654 start.go:360] acquireMachinesLock for ha-199780-m02: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:11:28.603394   28654 start.go:364] duration metric: took 25.364µs to acquireMachinesLock for "ha-199780-m02"
	I1009 19:11:28.603415   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:28.603505   28654 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1009 19:11:28.604883   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:11:28.604963   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:28.604996   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:28.620174   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I1009 19:11:28.620709   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:28.621235   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:28.621259   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:28.621551   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:28.621737   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:28.621880   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:28.622077   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:11:28.622107   28654 client.go:168] LocalClient.Create starting
	I1009 19:11:28.622146   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:11:28.622193   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622213   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622278   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:11:28.622306   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:11:28.622322   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:11:28.622345   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:11:28.622356   28654 main.go:141] libmachine: (ha-199780-m02) Calling .PreCreateCheck
	I1009 19:11:28.622534   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:28.622992   28654 main.go:141] libmachine: Creating machine...
	I1009 19:11:28.623009   28654 main.go:141] libmachine: (ha-199780-m02) Calling .Create
	I1009 19:11:28.623202   28654 main.go:141] libmachine: (ha-199780-m02) Creating KVM machine...
	I1009 19:11:28.624414   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing default KVM network
	I1009 19:11:28.624553   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found existing private KVM network mk-ha-199780
	I1009 19:11:28.624697   28654 main.go:141] libmachine: (ha-199780-m02) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:28.624717   28654 main.go:141] libmachine: (ha-199780-m02) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:11:28.627180   28654 main.go:141] libmachine: (ha-199780-m02) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:11:28.627222   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.624673   29017 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:28.859004   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:28.858864   29017 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa...
	I1009 19:11:29.192250   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192144   29017 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk...
	I1009 19:11:29.192281   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing magic tar header
	I1009 19:11:29.192291   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Writing SSH key tar header
	I1009 19:11:29.192299   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:29.192250   29017 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 ...
	I1009 19:11:29.192353   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02
	I1009 19:11:29.192372   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:11:29.192385   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02 (perms=drwx------)
	I1009 19:11:29.192398   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:11:29.192410   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:11:29.192419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:11:29.192426   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:11:29.192433   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Checking permissions on dir: /home
	I1009 19:11:29.192451   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Skipping /home - not owner
	I1009 19:11:29.192471   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:11:29.192484   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:11:29.192493   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:11:29.192501   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:11:29.192508   28654 main.go:141] libmachine: (ha-199780-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:11:29.192515   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:29.193313   28654 main.go:141] libmachine: (ha-199780-m02) define libvirt domain using xml: 
	I1009 19:11:29.193342   28654 main.go:141] libmachine: (ha-199780-m02) <domain type='kvm'>
	I1009 19:11:29.193353   28654 main.go:141] libmachine: (ha-199780-m02)   <name>ha-199780-m02</name>
	I1009 19:11:29.193360   28654 main.go:141] libmachine: (ha-199780-m02)   <memory unit='MiB'>2200</memory>
	I1009 19:11:29.193368   28654 main.go:141] libmachine: (ha-199780-m02)   <vcpu>2</vcpu>
	I1009 19:11:29.193381   28654 main.go:141] libmachine: (ha-199780-m02)   <features>
	I1009 19:11:29.193404   28654 main.go:141] libmachine: (ha-199780-m02)     <acpi/>
	I1009 19:11:29.193418   28654 main.go:141] libmachine: (ha-199780-m02)     <apic/>
	I1009 19:11:29.193448   28654 main.go:141] libmachine: (ha-199780-m02)     <pae/>
	I1009 19:11:29.193470   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193481   28654 main.go:141] libmachine: (ha-199780-m02)   </features>
	I1009 19:11:29.193502   28654 main.go:141] libmachine: (ha-199780-m02)   <cpu mode='host-passthrough'>
	I1009 19:11:29.193521   28654 main.go:141] libmachine: (ha-199780-m02)   
	I1009 19:11:29.193531   28654 main.go:141] libmachine: (ha-199780-m02)   </cpu>
	I1009 19:11:29.193548   28654 main.go:141] libmachine: (ha-199780-m02)   <os>
	I1009 19:11:29.193569   28654 main.go:141] libmachine: (ha-199780-m02)     <type>hvm</type>
	I1009 19:11:29.193584   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='cdrom'/>
	I1009 19:11:29.193597   28654 main.go:141] libmachine: (ha-199780-m02)     <boot dev='hd'/>
	I1009 19:11:29.193605   28654 main.go:141] libmachine: (ha-199780-m02)     <bootmenu enable='no'/>
	I1009 19:11:29.193614   28654 main.go:141] libmachine: (ha-199780-m02)   </os>
	I1009 19:11:29.193622   28654 main.go:141] libmachine: (ha-199780-m02)   <devices>
	I1009 19:11:29.193631   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='cdrom'>
	I1009 19:11:29.193644   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/boot2docker.iso'/>
	I1009 19:11:29.193658   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hdc' bus='scsi'/>
	I1009 19:11:29.193669   28654 main.go:141] libmachine: (ha-199780-m02)       <readonly/>
	I1009 19:11:29.193678   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193692   28654 main.go:141] libmachine: (ha-199780-m02)     <disk type='file' device='disk'>
	I1009 19:11:29.193703   28654 main.go:141] libmachine: (ha-199780-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:11:29.193717   28654 main.go:141] libmachine: (ha-199780-m02)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/ha-199780-m02.rawdisk'/>
	I1009 19:11:29.193731   28654 main.go:141] libmachine: (ha-199780-m02)       <target dev='hda' bus='virtio'/>
	I1009 19:11:29.193743   28654 main.go:141] libmachine: (ha-199780-m02)     </disk>
	I1009 19:11:29.193752   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193764   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='mk-ha-199780'/>
	I1009 19:11:29.193774   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193784   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193794   28654 main.go:141] libmachine: (ha-199780-m02)     <interface type='network'>
	I1009 19:11:29.193805   28654 main.go:141] libmachine: (ha-199780-m02)       <source network='default'/>
	I1009 19:11:29.193820   28654 main.go:141] libmachine: (ha-199780-m02)       <model type='virtio'/>
	I1009 19:11:29.193833   28654 main.go:141] libmachine: (ha-199780-m02)     </interface>
	I1009 19:11:29.193841   28654 main.go:141] libmachine: (ha-199780-m02)     <serial type='pty'>
	I1009 19:11:29.193855   28654 main.go:141] libmachine: (ha-199780-m02)       <target port='0'/>
	I1009 19:11:29.193865   28654 main.go:141] libmachine: (ha-199780-m02)     </serial>
	I1009 19:11:29.193871   28654 main.go:141] libmachine: (ha-199780-m02)     <console type='pty'>
	I1009 19:11:29.193881   28654 main.go:141] libmachine: (ha-199780-m02)       <target type='serial' port='0'/>
	I1009 19:11:29.193890   28654 main.go:141] libmachine: (ha-199780-m02)     </console>
	I1009 19:11:29.193901   28654 main.go:141] libmachine: (ha-199780-m02)     <rng model='virtio'>
	I1009 19:11:29.193911   28654 main.go:141] libmachine: (ha-199780-m02)       <backend model='random'>/dev/random</backend>
	I1009 19:11:29.193933   28654 main.go:141] libmachine: (ha-199780-m02)     </rng>
	I1009 19:11:29.193946   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193962   28654 main.go:141] libmachine: (ha-199780-m02)     
	I1009 19:11:29.193978   28654 main.go:141] libmachine: (ha-199780-m02)   </devices>
	I1009 19:11:29.193990   28654 main.go:141] libmachine: (ha-199780-m02) </domain>
	I1009 19:11:29.193999   28654 main.go:141] libmachine: (ha-199780-m02) 
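The block above is a complete libvirt domain definition for the second node: KVM, 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, a raw disk, and two virtio NICs on the mk-ha-199780 and default networks. Defining and booting such a domain from Go can be done with the libvirt bindings; the sketch below is an assumption-laden illustration (package path libvirt.org/go/libvirt, XML read from a hypothetical file) rather than the kvm2 driver's real code:

    // Illustrative sketch: define and start a KVM guest from a libvirt domain XML document.
    package main

    import (
    	"fmt"
    	"os"

    	"libvirt.org/go/libvirt"
    )

    func main() {
    	domainXML, err := os.ReadFile("ha-199780-m02.xml") // hypothetical file holding XML like the above
    	if err != nil {
    		panic(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain, then boot it.
    	dom, err := conn.DomainDefineXML(string(domainXML))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()
    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	name, _ := dom.GetName()
    	fmt.Println("started domain:", name)
    }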
	I1009 19:11:29.200233   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:9f:20:14 in network default
	I1009 19:11:29.200751   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring networks are active...
	I1009 19:11:29.200778   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:29.201355   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network default is active
	I1009 19:11:29.201602   28654 main.go:141] libmachine: (ha-199780-m02) Ensuring network mk-ha-199780 is active
	I1009 19:11:29.201876   28654 main.go:141] libmachine: (ha-199780-m02) Getting domain xml...
	I1009 19:11:29.202487   28654 main.go:141] libmachine: (ha-199780-m02) Creating domain...
	I1009 19:11:30.395985   28654 main.go:141] libmachine: (ha-199780-m02) Waiting to get IP...
	I1009 19:11:30.396850   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.397221   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.397245   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.397192   29017 retry.go:31] will retry after 306.623748ms: waiting for machine to come up
	I1009 19:11:30.705681   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.706111   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.706142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.706073   29017 retry.go:31] will retry after 272.886306ms: waiting for machine to come up
	I1009 19:11:30.980636   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:30.981119   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:30.981146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:30.981081   29017 retry.go:31] will retry after 373.250902ms: waiting for machine to come up
	I1009 19:11:31.355561   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.355953   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.355981   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.355905   29017 retry.go:31] will retry after 402.386513ms: waiting for machine to come up
	I1009 19:11:31.759650   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:31.760178   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:31.760204   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:31.760143   29017 retry.go:31] will retry after 700.718844ms: waiting for machine to come up
	I1009 19:11:32.462533   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:32.462970   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:32.462999   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:32.462916   29017 retry.go:31] will retry after 892.701908ms: waiting for machine to come up
	I1009 19:11:33.357278   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:33.357677   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:33.357700   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:33.357645   29017 retry.go:31] will retry after 892.900741ms: waiting for machine to come up
	I1009 19:11:34.252184   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:34.252581   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:34.252605   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:34.252542   29017 retry.go:31] will retry after 919.729577ms: waiting for machine to come up
	I1009 19:11:35.174060   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:35.174445   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:35.174475   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:35.174422   29017 retry.go:31] will retry after 1.688669614s: waiting for machine to come up
	I1009 19:11:36.865075   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:36.865384   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:36.865412   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:36.865340   29017 retry.go:31] will retry after 1.768384485s: waiting for machine to come up
	I1009 19:11:38.635106   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:38.635545   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:38.635574   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:38.635487   29017 retry.go:31] will retry after 2.193559284s: waiting for machine to come up
	I1009 19:11:40.831238   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:40.831740   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:40.831780   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:40.831709   29017 retry.go:31] will retry after 3.434402997s: waiting for machine to come up
	I1009 19:11:44.267146   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:44.267644   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:44.267671   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:44.267602   29017 retry.go:31] will retry after 4.164642466s: waiting for machine to come up
	I1009 19:11:48.436657   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:48.436991   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find current IP address of domain ha-199780-m02 in network mk-ha-199780
	I1009 19:11:48.437015   28654 main.go:141] libmachine: (ha-199780-m02) DBG | I1009 19:11:48.436952   29017 retry.go:31] will retry after 3.860630111s: waiting for machine to come up
	I1009 19:11:52.302118   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302487   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has current primary IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.302554   28654 main.go:141] libmachine: (ha-199780-m02) Found IP for machine: 192.168.39.83
	I1009 19:11:52.302579   28654 main.go:141] libmachine: (ha-199780-m02) Reserving static IP address...
	I1009 19:11:52.302886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | unable to find host DHCP lease matching {name: "ha-199780-m02", mac: "52:54:00:49:9d:cf", ip: "192.168.39.83"} in network mk-ha-199780
	I1009 19:11:52.372076   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Getting to WaitForSSH function...
	I1009 19:11:52.372102   28654 main.go:141] libmachine: (ha-199780-m02) Reserved static IP address: 192.168.39.83
	I1009 19:11:52.372115   28654 main.go:141] libmachine: (ha-199780-m02) Waiting for SSH to be available...
	I1009 19:11:52.374841   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375419   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:minikube Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.375450   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.375560   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH client type: external
	I1009 19:11:52.375580   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa (-rw-------)
	I1009 19:11:52.375612   28654 main.go:141] libmachine: (ha-199780-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.83 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:11:52.375635   28654 main.go:141] libmachine: (ha-199780-m02) DBG | About to run SSH command:
	I1009 19:11:52.375646   28654 main.go:141] libmachine: (ha-199780-m02) DBG | exit 0
	I1009 19:11:52.498886   28654 main.go:141] libmachine: (ha-199780-m02) DBG | SSH cmd err, output: <nil>: 
	I1009 19:11:52.499168   28654 main.go:141] libmachine: (ha-199780-m02) KVM machine creation complete!
	I1009 19:11:52.499479   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:52.500069   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500241   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:52.500393   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:11:52.500411   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetState
	I1009 19:11:52.501707   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:11:52.501728   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:11:52.501749   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:11:52.501756   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.503758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504142   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.504165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.504286   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.504437   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504575   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.504686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.504794   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.504979   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.504989   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:11:52.602177   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
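Running `exit 0` over SSH, as above, is the usual readiness probe: if a session can be opened with the machine's generated key and a no-op command succeeds, SSH is considered available. A hedged golang.org/x/crypto/ssh version of that probe (host, user, and key path mirror the log but are assumptions here; this is not libmachine's implementation):

    // Illustrative sketch: check SSH availability by running "exit 0".
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func sshReady(addr, user, keyPath string) error {
    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return err
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		return err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
    		Timeout:         10 * time.Second,
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer session.Close()
    	return session.Run("exit 0")
    }

    func main() {
    	if err := sshReady("192.168.39.83:22", "docker",
    		"/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa"); err != nil {
    		panic(err)
    	}
    	fmt.Println("SSH is available")
    }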
	I1009 19:11:52.602204   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:11:52.602213   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.604728   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605107   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.605141   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.605291   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.605469   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605606   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.605724   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.605872   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.606034   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.606045   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:11:52.703707   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:11:52.703764   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:11:52.703771   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:11:52.703777   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704032   28654 buildroot.go:166] provisioning hostname "ha-199780-m02"
	I1009 19:11:52.704060   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.704231   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.706798   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707185   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.707208   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.707350   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.707510   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707650   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.707773   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.707888   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.708063   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.708075   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m02 && echo "ha-199780-m02" | sudo tee /etc/hostname
	I1009 19:11:52.823258   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m02
	
	I1009 19:11:52.823287   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.825577   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.825861   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.825888   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.826053   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:52.826228   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826361   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:52.826462   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:52.826604   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:52.826970   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:52.827005   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:11:52.936284   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:11:52.936322   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:11:52.936338   28654 buildroot.go:174] setting up certificates
	I1009 19:11:52.936349   28654 provision.go:84] configureAuth start
	I1009 19:11:52.936358   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetMachineName
	I1009 19:11:52.936621   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:52.939014   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939357   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.939378   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.939565   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:52.941751   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942083   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:52.942102   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:52.942262   28654 provision.go:143] copyHostCerts
	I1009 19:11:52.942292   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942326   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:11:52.942335   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:11:52.942400   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:11:52.942490   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942507   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:11:52.942513   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:11:52.942543   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:11:52.942586   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942603   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:11:52.942608   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:11:52.942630   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:11:52.942675   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m02 san=[127.0.0.1 192.168.39.83 ha-199780-m02 localhost minikube]
	I1009 19:11:53.040172   28654 provision.go:177] copyRemoteCerts
	I1009 19:11:53.040224   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:11:53.040246   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.042771   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043144   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.043165   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.043339   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.043536   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.043695   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.043830   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.125536   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:11:53.125611   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:11:53.152398   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:11:53.152462   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:11:53.176418   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:11:53.176476   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:11:53.199215   28654 provision.go:87] duration metric: took 262.855174ms to configureAuth
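configureAuth above copies the shared CA material to the new machine and issues a per-machine server certificate whose SANs are exactly the ones printed in the log (127.0.0.1, 192.168.39.83, ha-199780-m02, localhost, minikube). A rough sketch of that issuance step with Go's crypto/x509; it assumes the CA key is an RSA key in PKCS#1 PEM form, which may differ from what minikube actually stores:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // CA material from the paths shown in the log above.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 CA key
        if err != nil {
            log.Fatal(err)
        }

        // New server key plus a template carrying the SANs from the log line above.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-199780-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.83")},
            DNSNames:     []string{"ha-199780-m02", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }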
	I1009 19:11:53.199238   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:11:53.199408   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:53.199489   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.202051   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202440   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.202470   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.202579   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.202742   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.202905   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.203044   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.203213   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.203367   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.203381   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:11:53.429894   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:11:53.429922   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:11:53.429933   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetURL
	I1009 19:11:53.431192   28654 main.go:141] libmachine: (ha-199780-m02) DBG | Using libvirt version 6000000
	I1009 19:11:53.433633   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.433917   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.433942   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.434095   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:11:53.434111   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:11:53.434119   28654 client.go:171] duration metric: took 24.812002035s to LocalClient.Create
	I1009 19:11:53.434141   28654 start.go:167] duration metric: took 24.812066243s to libmachine.API.Create "ha-199780"
	I1009 19:11:53.434153   28654 start.go:293] postStartSetup for "ha-199780-m02" (driver="kvm2")
	I1009 19:11:53.434164   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:11:53.434178   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.434386   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:11:53.434414   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.436444   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436741   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.436766   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.436885   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.437048   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.437204   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.437329   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.517247   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:11:53.521546   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:11:53.521570   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:11:53.521628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:11:53.521696   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:11:53.521706   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:11:53.521794   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:11:53.531170   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:53.555463   28654 start.go:296] duration metric: took 121.295956ms for postStartSetup
	I1009 19:11:53.555509   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetConfigRaw
	I1009 19:11:53.556089   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.558610   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.558965   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.558990   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.559241   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:11:53.559417   28654 start.go:128] duration metric: took 24.955894473s to createHost
	I1009 19:11:53.559436   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.561758   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562120   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.562145   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.562297   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.562466   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562603   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.562686   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.562800   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:11:53.562944   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.83 22 <nil> <nil>}
	I1009 19:11:53.562953   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:11:53.659740   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501113.618380735
	
	I1009 19:11:53.659761   28654 fix.go:216] guest clock: 1728501113.618380735
	I1009 19:11:53.659770   28654 fix.go:229] Guest: 2024-10-09 19:11:53.618380735 +0000 UTC Remote: 2024-10-09 19:11:53.559427397 +0000 UTC m=+71.164621077 (delta=58.953338ms)
	I1009 19:11:53.659789   28654 fix.go:200] guest clock delta is within tolerance: 58.953338ms
	I1009 19:11:53.659795   28654 start.go:83] releasing machines lock for "ha-199780-m02", held for 25.056389443s
	I1009 19:11:53.659818   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.660047   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:53.662723   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.663038   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.663084   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.665166   28654 out.go:177] * Found network options:
	I1009 19:11:53.666287   28654 out.go:177]   - NO_PROXY=192.168.39.114
	W1009 19:11:53.667466   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.667505   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.667962   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668130   28654 main.go:141] libmachine: (ha-199780-m02) Calling .DriverName
	I1009 19:11:53.668248   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:11:53.668296   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	W1009 19:11:53.668300   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:11:53.668381   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:11:53.668416   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHHostname
	I1009 19:11:53.670930   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671210   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671283   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671304   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671447   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671527   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:53.671552   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:53.671587   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671735   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHPort
	I1009 19:11:53.671750   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.671893   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHKeyPath
	I1009 19:11:53.671912   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.672014   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetSSHUsername
	I1009 19:11:53.672148   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.83 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m02/id_rsa Username:docker}
	I1009 19:11:53.899517   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:11:53.905678   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:11:53.905741   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:11:53.922185   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:11:53.922206   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:11:53.922263   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:11:53.937820   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:11:53.953029   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:11:53.953091   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:11:53.967078   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:11:53.981025   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:11:54.113745   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:11:54.255530   28654 docker.go:233] disabling docker service ...
	I1009 19:11:54.255587   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:11:54.270170   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:11:54.283110   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:11:54.427830   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:11:54.542861   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:11:54.559019   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:11:54.577775   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:11:54.577834   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.588489   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:11:54.588563   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.598988   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.609116   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.619104   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:11:54.629621   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.640002   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.656572   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:11:54.666994   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:11:54.677176   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:11:54.677232   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:11:54.689637   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:11:54.698765   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:54.819897   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:11:54.911734   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:11:54.911789   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:11:54.916451   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:11:54.916494   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:11:54.920158   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:11:54.955402   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:11:54.955480   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:54.982980   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:11:55.012563   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:11:55.013723   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:11:55.014768   28654 main.go:141] libmachine: (ha-199780-m02) Calling .GetIP
	I1009 19:11:55.017153   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017506   28654 main.go:141] libmachine: (ha-199780-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:49:9d:cf", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:11:43 +0000 UTC Type:0 Mac:52:54:00:49:9d:cf Iaid: IPaddr:192.168.39.83 Prefix:24 Hostname:ha-199780-m02 Clientid:01:52:54:00:49:9d:cf}
	I1009 19:11:55.017538   28654 main.go:141] libmachine: (ha-199780-m02) DBG | domain ha-199780-m02 has defined IP address 192.168.39.83 and MAC address 52:54:00:49:9d:cf in network mk-ha-199780
	I1009 19:11:55.017692   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:11:55.021943   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:55.034196   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:11:55.034432   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:11:55.034865   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.034912   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.049583   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40841
	I1009 19:11:55.050018   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.050467   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.050491   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.050776   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.050944   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:11:55.052331   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:55.052611   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:55.052643   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:55.066531   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43893
	I1009 19:11:55.066862   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:55.067348   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:55.067376   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:55.067659   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:55.067826   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:55.067945   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.83
	I1009 19:11:55.067956   28654 certs.go:194] generating shared ca certs ...
	I1009 19:11:55.067973   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.068103   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:11:55.068159   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:11:55.068171   28654 certs.go:256] generating profile certs ...
	I1009 19:11:55.068256   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:11:55.068286   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0
	I1009 19:11:55.068307   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.254]
	I1009 19:11:55.274614   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 ...
	I1009 19:11:55.274645   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0: {Name:mkea8c047205788ccead22201bc77c7190717cca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274816   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 ...
	I1009 19:11:55.274832   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0: {Name:mk98b6fcd80ec856f6c63ddb6177c8a08e2dbf7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:11:55.274920   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:11:55.275082   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.f3e9b5b0 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:11:55.275255   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:11:55.275273   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:11:55.275291   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:11:55.275308   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:11:55.275327   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:11:55.275347   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:11:55.275366   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:11:55.275383   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:11:55.275401   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:11:55.275466   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:11:55.275511   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:11:55.275524   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:11:55.275558   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:11:55.275590   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:11:55.275622   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:11:55.275679   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:11:55.275720   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.275740   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.275758   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.275797   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:55.278862   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279369   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:55.279395   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:55.279612   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:55.279780   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:55.279952   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:55.280049   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:55.351381   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:11:55.355961   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:11:55.367055   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:11:55.371613   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:11:55.382154   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:11:55.386133   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:11:55.395984   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:11:55.399714   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:11:55.409621   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:11:55.413853   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:11:55.423766   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:11:55.427525   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:11:55.437575   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:11:55.462624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:11:55.485719   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:11:55.508128   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:11:55.530803   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 19:11:55.555486   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:11:55.580139   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:11:55.603207   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:11:55.626373   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:11:55.649676   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:11:55.673656   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:11:55.696721   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:11:55.712647   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:11:55.728611   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:11:55.744619   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:11:55.760726   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:11:55.776763   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:11:55.792315   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:11:55.807929   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:11:55.813442   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:11:55.823376   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827581   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.827627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:11:55.833072   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:11:55.842843   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:11:55.852649   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856766   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.856802   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:11:55.862146   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:11:55.872016   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:11:55.881805   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885859   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.885905   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:11:55.891246   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:11:55.901096   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:11:55.904965   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:11:55.905009   28654 kubeadm.go:934] updating node {m02 192.168.39.83 8443 v1.31.1 crio true true} ...
	I1009 19:11:55.905077   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.83
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:11:55.905098   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:11:55.905121   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:11:55.919709   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:11:55.919759   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
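The manifest above is the kube-vip static pod that advertises the control-plane VIP 192.168.39.254 and load-balances API traffic on port 8443; a few steps later it is copied to /etc/kubernetes/manifests/kube-vip.yaml, from where kubelet runs it directly without the scheduler. A small sketch that simply parses such a manifest back into a typed object, assuming sigs.k8s.io/yaml and the k8s.io/api types are available on the Go path:

    package main

    import (
        "fmt"
        "log"
        "os"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Static pod path used later in this log when the manifest is scp'd to the node.
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            log.Fatal(err)
        }
        var pod corev1.Pod
        if err := yaml.Unmarshal(data, &pod); err != nil {
            log.Fatal(err)
        }
        // Expected output for the manifest above: kube-vip ghcr.io/kube-vip/kube-vip:v0.8.3
        fmt.Println(pod.Name, pod.Spec.Containers[0].Image)
    }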
	I1009 19:11:55.919801   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.929228   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:11:55.929276   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:11:55.938319   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:11:55.938340   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938391   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:11:55.938402   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1009 19:11:55.938404   28654 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1009 19:11:55.942635   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:11:55.942660   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:11:57.241263   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:11:57.255221   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.255304   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:11:57.259158   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:11:57.259186   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:11:57.547794   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.547883   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:11:57.562384   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:11:57.562426   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
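The kubectl, kubelet and kubeadm binaries for v1.31.1 are fetched from dl.k8s.io with a companion .sha256 file used as the checksum, cached under .minikube/cache, and then copied into /var/lib/minikube/binaries on the node. A minimal download-and-verify sketch for one of them; the URL comes from the log, the helper name and local paths are illustrative:

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
        "strings"
    )

    // fetch downloads url to path and returns the SHA-256 of the bytes written.
    func fetch(url, path string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        f, err := os.Create(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/"
        sum, err := fetch(base+"kubelet", "/tmp/kubelet")
        if err != nil {
            log.Fatal(err)
        }
        resp, err := http.Get(base + "kubelet.sha256")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        want, _ := io.ReadAll(resp.Body)
        if sum != strings.TrimSpace(string(want)) {
            log.Fatalf("checksum mismatch: got %s want %s", sum, want)
        }
        fmt.Println("kubelet verified")
    }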
	I1009 19:11:57.842477   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:11:57.852027   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:11:57.867591   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:11:57.883108   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:11:57.898843   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:11:57.902642   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:11:57.914959   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:11:58.028127   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:11:58.044965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:11:58.045423   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:11:58.045473   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:11:58.059986   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I1009 19:11:58.060458   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:11:58.060917   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:11:58.060934   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:11:58.061238   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:11:58.061410   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:58.061538   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:11:58.061538   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:11:58.061653   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:11:58.061673   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:11:58.064589   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.064969   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:11:58.064994   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:11:58.065152   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:11:58.065308   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:11:58.065538   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:11:58.065661   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:11:58.210321   28654 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:11:58.210383   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443"
	I1009 19:12:19.134246   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kfwmel.rmtx9gjzbnc80w0m --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m02 --control-plane --apiserver-advertise-address=192.168.39.83 --apiserver-bind-port=8443": (20.923839028s)
	I1009 19:12:19.134290   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:12:19.605010   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m02 minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:12:19.748442   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:12:19.868185   28654 start.go:319] duration metric: took 21.806636434s to joinCluster
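The lines above show the sequence minikube uses to add the second control-plane node: print a join command on the existing node (kubeadm token create --print-join-command), run kubeadm join with the --control-plane flags on the new machine, re-enable the kubelet, then label the node and drop the control-plane NoSchedule taint. The following is only a rough, self-contained sketch of those same steps using os/exec; in the real run minikube executes them over SSH (ssh_runner) with sudo and an explicit PATH, and the token, hash and addresses below are placeholders rather than values taken from this run.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command locally and prints its combined output.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", name, args, out)
	return err
}

func main() {
	// 1. On an existing control-plane node: mint a join command (token + CA hash).
	_ = run("kubeadm", "token", "create", "--print-join-command", "--ttl=0")

	// 2. On the joining machine: run the printed command plus the control-plane
	//    flags seen in the log (endpoint, token and hash are placeholders here).
	_ = run("kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", "<token>", "--discovery-token-ca-cert-hash", "sha256:<hash>",
		"--control-plane", "--apiserver-advertise-address", "<node-ip>",
		"--apiserver-bind-port", "8443")

	// 3. Enable and start the kubelet, then label the node and remove the
	//    control-plane NoSchedule taint, mirroring the last two kubectl calls above.
	_ = run("systemctl", "enable", "--now", "kubelet")
	_ = run("kubectl", "label", "--overwrite", "nodes", "ha-199780-m02",
		"minikube.k8s.io/primary=false")
	_ = run("kubectl", "taint", "nodes", "ha-199780-m02",
		"node-role.kubernetes.io/control-plane:NoSchedule-")
}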
	I1009 19:12:19.868265   28654 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:19.868592   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:19.870842   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:12:19.872112   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:12:20.132051   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:12:20.184872   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:12:20.185127   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:12:20.185184   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:12:20.185366   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m02" to be "Ready" ...
	I1009 19:12:20.185447   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.185457   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.185464   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.185468   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.196121   28654 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1009 19:12:20.685641   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:20.685666   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:20.685677   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:20.685683   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:20.700948   28654 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1009 19:12:21.186360   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.186379   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.186386   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.186390   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.190077   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:21.686495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:21.686523   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:21.686535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:21.686542   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:21.689757   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.185915   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.185938   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.185949   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.185955   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.189220   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:22.189830   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:22.685885   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:22.685909   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:22.685925   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:22.685930   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:22.692565   28654 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 19:12:23.186131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.186153   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.186163   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.186170   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.190703   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:23.685823   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:23.685851   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:23.685864   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:23.685874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:23.689295   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:24.186259   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.186290   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.186302   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.190419   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:24.190953   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:24.686386   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:24.686405   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:24.686412   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:24.686418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:24.689349   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:25.186405   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.186431   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.186443   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.186448   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.189677   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:25.685894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:25.685917   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:25.685930   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:25.685938   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:25.688721   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:26.185700   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.185718   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.185725   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.185729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.189091   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:26.686200   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:26.686219   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:26.686227   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:26.686233   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:26.691177   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:26.691800   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:27.186166   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.186200   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.186216   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.186227   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.208799   28654 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1009 19:12:27.686569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:27.686596   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:27.686606   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:27.686611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:27.690120   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.186542   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.186562   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.186570   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.186574   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.189659   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:28.685814   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:28.685834   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:28.685842   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:28.685846   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:28.689015   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.185658   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.185692   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.185703   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.185708   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.188963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:29.189656   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:29.686079   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:29.686104   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:29.686115   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:29.686119   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:29.689437   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.186344   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.186367   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.186378   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.186384   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.189946   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:30.685870   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:30.685896   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:30.685904   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:30.685909   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:30.689100   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.186316   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.186342   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.186351   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.186356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.189992   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:31.190453   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:31.685857   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:31.685878   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:31.685886   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:31.685890   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:31.689411   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:32.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.186439   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.186450   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.186457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.189297   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:32.686105   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:32.686126   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:32.686134   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:32.686138   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:32.689698   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.185993   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.186015   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.186024   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.186028   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.189373   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.685932   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:33.685955   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:33.685963   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:33.685968   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:33.689670   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:33.690285   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:34.185640   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.185662   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.185670   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.185674   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.188694   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:34.686203   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:34.686223   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:34.686231   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:34.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:34.690146   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.185607   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.185628   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.185636   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.185640   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.188854   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:35.685726   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:35.685746   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:35.685759   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:35.685764   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:35.689172   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.186278   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.186301   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.186308   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.186312   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.189767   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:36.190519   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:36.685809   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:36.685841   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:36.685849   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:36.685853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:36.688923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.185894   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.185920   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.185933   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.185940   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.189465   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:37.686197   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:37.686222   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:37.686230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:37.686235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:37.689394   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.185922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.185948   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.185956   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.185961   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.189255   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.685706   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:38.685729   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:38.685742   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:38.685751   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:38.689204   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:38.689971   28654 node_ready.go:53] node "ha-199780-m02" has status "Ready":"False"
	I1009 19:12:39.186413   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.186433   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.186447   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.186452   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.189522   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.190154   28654 node_ready.go:49] node "ha-199780-m02" has status "Ready":"True"
	I1009 19:12:39.190172   28654 node_ready.go:38] duration metric: took 19.004790985s for node "ha-199780-m02" to be "Ready" ...
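The repeated GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02 requests above are minikube polling the node object roughly every 500ms until its Ready condition turns True, which here took about 19s. Below is a hedged client-go equivalent of that loop; the kubeconfig path and node name come from this log, but the polling helper and error handling are assumptions, not minikube's node_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19780-9412/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms, give up after 6 minutes -- the same budget the log shows.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-199780-m02", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node Ready:", err == nil)
}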
	I1009 19:12:39.190183   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:12:39.190256   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:39.190268   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.190277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.190292   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.194625   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:39.201057   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.201129   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:12:39.201137   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.201144   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.201149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.203552   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.204277   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.204291   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.204298   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.204303   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.206434   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.207017   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.207033   28654 pod_ready.go:82] duration metric: took 5.954504ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207041   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.207118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:12:39.207128   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.207139   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.207148   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.209367   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.210180   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.210198   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.210204   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.210207   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.212254   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.212911   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.212929   28654 pod_ready.go:82] duration metric: took 5.881939ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212939   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.212996   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:12:39.213004   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.213010   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.213014   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.215519   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.216198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.216212   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.216222   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.216228   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.218680   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.219274   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.219293   28654 pod_ready.go:82] duration metric: took 6.345815ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219306   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.219361   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:12:39.219370   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.219379   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.219388   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.222905   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.223852   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.223867   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.223874   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.223880   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.226122   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.226546   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.226559   28654 pod_ready.go:82] duration metric: took 7.244216ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.226571   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.386954   28654 request.go:632] Waited for 160.312334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387019   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:12:39.387028   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.387041   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.387059   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.390052   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:39.587135   28654 request.go:632] Waited for 196.31885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587196   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:39.587203   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.587211   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.587219   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.590448   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.591164   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.591183   28654 pod_ready.go:82] duration metric: took 364.606313ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
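The "Waited ... due to client-side throttling, not priority and fairness" messages come from client-go's client-side rate limiter: the rest.Config dumped earlier shows QPS:0 and Burst:0, which makes client-go fall back to its defaults of 5 requests/s with a burst of 10, so back-to-back GETs like the ones above get briefly delayed and logged by request.go. A harness that wanted to avoid those delays could raise the limits on the config; a minimal, illustrative sketch (the kubeconfig path and the numbers are assumptions, not what minikube ships with):

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at 0 client-go applies its defaults (5 QPS, burst 10),
	// which is what produces the client-side throttling waits seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}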
	I1009 19:12:39.591192   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.787247   28654 request.go:632] Waited for 195.987261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:12:39.787335   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.787346   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.787354   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.790620   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.986772   28654 request.go:632] Waited for 195.363358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986825   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:39.986830   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:39.986837   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:39.986840   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:39.990003   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:39.990664   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:39.990682   28654 pod_ready.go:82] duration metric: took 399.483816ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:39.990691   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.186433   28654 request.go:632] Waited for 195.681011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186513   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:12:40.186524   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.186535   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.186544   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.189683   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.386818   28654 request.go:632] Waited for 196.355604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386887   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:40.386893   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.386900   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.386905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.391133   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:40.391614   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.391638   28654 pod_ready.go:82] duration metric: took 400.93972ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.391651   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.586680   28654 request.go:632] Waited for 194.949325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:12:40.586742   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.586750   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.586755   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.590444   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.786422   28654 request.go:632] Waited for 195.280915ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786495   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:40.786501   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.786509   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.786513   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.790326   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:40.791006   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:40.791029   28654 pod_ready.go:82] duration metric: took 399.365639ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.791046   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:40.987070   28654 request.go:632] Waited for 195.933748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987131   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:12:40.987136   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:40.987143   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:40.987147   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:40.990605   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.186624   28654 request.go:632] Waited for 195.268606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186692   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.186704   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.186711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.186715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.189956   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.190470   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.190489   28654 pod_ready.go:82] duration metric: took 399.435329ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.190501   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.386649   28654 request.go:632] Waited for 196.07336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:12:41.386706   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.386713   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.386716   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.390032   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.587033   28654 request.go:632] Waited for 196.334104ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587126   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:41.587138   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.587149   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.587167   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.590021   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.590641   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.590663   28654 pod_ready.go:82] duration metric: took 400.153892ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.590678   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.786648   28654 request.go:632] Waited for 195.890444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786701   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:12:41.786708   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.786719   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.786729   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.789369   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:41.987345   28654 request.go:632] Waited for 197.361828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987411   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:12:41.987416   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:41.987424   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:41.987427   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:41.990745   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:41.991278   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:41.991294   28654 pod_ready.go:82] duration metric: took 400.607782ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:41.991303   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.187413   28654 request.go:632] Waited for 196.036626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187472   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:12:42.187478   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.187488   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.187495   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.190480   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.386422   28654 request.go:632] Waited for 195.271897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386476   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:12:42.386482   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.386489   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.386493   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.389175   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:12:42.389733   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:12:42.389754   28654 pod_ready.go:82] duration metric: took 398.44435ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:12:42.389768   28654 pod_ready.go:39] duration metric: took 3.199572136s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
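The pod_ready.go waits above take each system-critical pod, check its Ready condition, and then fetch the node it is scheduled on. Below is a hedged client-go sketch of just the pod-side check, listing kube-system pods by the same labels the log names; the kubeconfig path and helper function are illustrative, not minikube's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The labels the log treats as system-critical.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}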
	I1009 19:12:42.389785   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:12:42.389849   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:12:42.407811   28654 api_server.go:72] duration metric: took 22.539512335s to wait for apiserver process to appear ...
	I1009 19:12:42.407834   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:12:42.407855   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:12:42.414877   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:12:42.414962   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:12:42.414974   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.414984   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.414991   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.416098   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:12:42.416185   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:12:42.416202   28654 api_server.go:131] duration metric: took 8.360977ms to wait for apiserver health ...
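The healthz probe above is an authenticated GET against /healthz on the apiserver; a 200 response with body "ok" is treated as healthy. A minimal sketch using client-go's REST client (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz through the authenticated REST client; the body should be "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
}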
	I1009 19:12:42.416212   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:12:42.587017   28654 request.go:632] Waited for 170.742751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587127   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.587142   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.587151   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.587157   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.592323   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:12:42.596935   28654 system_pods.go:59] 17 kube-system pods found
	I1009 19:12:42.596960   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.596966   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.596971   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.596974   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.596977   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.596980   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.596983   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.596991   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.596995   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.597000   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.597004   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.597007   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.597011   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.597015   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.597018   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.597023   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.597026   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.597031   28654 system_pods.go:74] duration metric: took 180.813466ms to wait for pod list to return data ...
	I1009 19:12:42.597039   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:12:42.787461   28654 request.go:632] Waited for 190.355387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787510   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:12:42.787515   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.787523   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.787526   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.791707   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.791908   28654 default_sa.go:45] found service account: "default"
	I1009 19:12:42.791921   28654 default_sa.go:55] duration metric: took 194.876803ms for default service account to be created ...
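default_sa.go above waits for the "default" ServiceAccount to exist in the default namespace, since pods cannot be created there until it does. The log does this with a List of the namespace's service accounts; the sketch below uses an equivalent Get of the single object, with an assumed kubeconfig path.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The wait is satisfied once the default ServiceAccount can be fetched.
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("found service account:", sa.Name)
}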
	I1009 19:12:42.791929   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:12:42.987347   28654 request.go:632] Waited for 195.347718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987402   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:12:42.987407   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:42.987415   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:42.987418   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:42.992125   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:12:42.996490   28654 system_pods.go:86] 17 kube-system pods found
	I1009 19:12:42.996520   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:12:42.996536   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:12:42.996541   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:12:42.996545   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:12:42.996552   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:12:42.996564   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:12:42.996567   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:12:42.996571   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:12:42.996576   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:12:42.996580   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:12:42.996583   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:12:42.996587   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:12:42.996591   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:12:42.996594   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:12:42.996598   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:12:42.996603   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:12:42.996605   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:12:42.996612   28654 system_pods.go:126] duration metric: took 204.678176ms to wait for k8s-apps to be running ...
	I1009 19:12:42.996621   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:12:42.996661   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:12:43.012943   28654 system_svc.go:56] duration metric: took 16.312977ms WaitForService to wait for kubelet
	I1009 19:12:43.012964   28654 kubeadm.go:582] duration metric: took 23.14466791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:12:43.012979   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:12:43.186683   28654 request.go:632] Waited for 173.643549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186731   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:12:43.186737   28654 round_trippers.go:469] Request Headers:
	I1009 19:12:43.186744   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:12:43.186750   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:12:43.190743   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:12:43.191568   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191597   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191608   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:12:43.191612   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:12:43.191618   28654 node_conditions.go:105] duration metric: took 178.633815ms to run NodePressure ...
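The node_conditions.go check above, although labelled "verifying NodePressure condition", reports each node's capacity: 17734596Ki of ephemeral storage and 2 CPUs per node in this run. A hedged client-go sketch that reads the same capacity fields (kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; pull the two resources the log reports.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}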
	I1009 19:12:43.191635   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:12:43.191663   28654 start.go:255] writing updated cluster config ...
	I1009 19:12:43.193878   28654 out.go:201] 
	I1009 19:12:43.195204   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:12:43.195296   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.196947   28654 out.go:177] * Starting "ha-199780-m03" control-plane node in "ha-199780" cluster
	I1009 19:12:43.198242   28654 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:12:43.198257   28654 cache.go:56] Caching tarball of preloaded images
	I1009 19:12:43.198354   28654 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:12:43.198368   28654 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:12:43.198453   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:12:43.198644   28654 start.go:360] acquireMachinesLock for ha-199780-m03: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:12:43.198693   28654 start.go:364] duration metric: took 30.243µs to acquireMachinesLock for "ha-199780-m03"
	I1009 19:12:43.198715   28654 start.go:93] Provisioning new machine with config: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:12:43.198839   28654 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1009 19:12:43.200292   28654 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 19:12:43.200365   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:12:43.200395   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:12:43.215501   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46413
	I1009 19:12:43.215883   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:12:43.216432   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:12:43.216461   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:12:43.216780   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:12:43.216973   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:12:43.217128   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:12:43.217269   28654 start.go:159] libmachine.API.Create for "ha-199780" (driver="kvm2")
	I1009 19:12:43.217296   28654 client.go:168] LocalClient.Create starting
	I1009 19:12:43.217327   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 19:12:43.217360   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217379   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217439   28654 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 19:12:43.217464   28654 main.go:141] libmachine: Decoding PEM data...
	I1009 19:12:43.217486   28654 main.go:141] libmachine: Parsing certificate...
	I1009 19:12:43.217518   28654 main.go:141] libmachine: Running pre-create checks...
	I1009 19:12:43.217529   28654 main.go:141] libmachine: (ha-199780-m03) Calling .PreCreateCheck
	I1009 19:12:43.217680   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:12:43.218031   28654 main.go:141] libmachine: Creating machine...
	I1009 19:12:43.218043   28654 main.go:141] libmachine: (ha-199780-m03) Calling .Create
	I1009 19:12:43.218158   28654 main.go:141] libmachine: (ha-199780-m03) Creating KVM machine...
	I1009 19:12:43.219370   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing default KVM network
	I1009 19:12:43.219545   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found existing private KVM network mk-ha-199780
	I1009 19:12:43.219670   28654 main.go:141] libmachine: (ha-199780-m03) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.219694   28654 main.go:141] libmachine: (ha-199780-m03) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 19:12:43.219770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.219647   29426 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.219839   28654 main.go:141] libmachine: (ha-199780-m03) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 19:12:43.456571   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.456478   29426 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa...
	I1009 19:12:43.637087   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637007   29426 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk...
	I1009 19:12:43.637111   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing magic tar header
	I1009 19:12:43.637123   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Writing SSH key tar header
	I1009 19:12:43.637132   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:43.637111   29426 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 ...
	I1009 19:12:43.637237   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03
	I1009 19:12:43.637256   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03 (perms=drwx------)
	I1009 19:12:43.637263   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 19:12:43.637277   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 19:12:43.637285   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:12:43.637293   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 19:12:43.637301   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 19:12:43.637308   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home/jenkins
	I1009 19:12:43.637313   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Checking permissions on dir: /home
	I1009 19:12:43.637322   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Skipping /home - not owner
	I1009 19:12:43.637330   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 19:12:43.637338   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 19:12:43.637345   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 19:12:43.637355   28654 main.go:141] libmachine: (ha-199780-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 19:12:43.637364   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:43.638194   28654 main.go:141] libmachine: (ha-199780-m03) define libvirt domain using xml: 
	I1009 19:12:43.638216   28654 main.go:141] libmachine: (ha-199780-m03) <domain type='kvm'>
	I1009 19:12:43.638226   28654 main.go:141] libmachine: (ha-199780-m03)   <name>ha-199780-m03</name>
	I1009 19:12:43.638239   28654 main.go:141] libmachine: (ha-199780-m03)   <memory unit='MiB'>2200</memory>
	I1009 19:12:43.638251   28654 main.go:141] libmachine: (ha-199780-m03)   <vcpu>2</vcpu>
	I1009 19:12:43.638258   28654 main.go:141] libmachine: (ha-199780-m03)   <features>
	I1009 19:12:43.638266   28654 main.go:141] libmachine: (ha-199780-m03)     <acpi/>
	I1009 19:12:43.638275   28654 main.go:141] libmachine: (ha-199780-m03)     <apic/>
	I1009 19:12:43.638288   28654 main.go:141] libmachine: (ha-199780-m03)     <pae/>
	I1009 19:12:43.638296   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638304   28654 main.go:141] libmachine: (ha-199780-m03)   </features>
	I1009 19:12:43.638314   28654 main.go:141] libmachine: (ha-199780-m03)   <cpu mode='host-passthrough'>
	I1009 19:12:43.638338   28654 main.go:141] libmachine: (ha-199780-m03)   
	I1009 19:12:43.638360   28654 main.go:141] libmachine: (ha-199780-m03)   </cpu>
	I1009 19:12:43.638375   28654 main.go:141] libmachine: (ha-199780-m03)   <os>
	I1009 19:12:43.638386   28654 main.go:141] libmachine: (ha-199780-m03)     <type>hvm</type>
	I1009 19:12:43.638397   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='cdrom'/>
	I1009 19:12:43.638406   28654 main.go:141] libmachine: (ha-199780-m03)     <boot dev='hd'/>
	I1009 19:12:43.638416   28654 main.go:141] libmachine: (ha-199780-m03)     <bootmenu enable='no'/>
	I1009 19:12:43.638425   28654 main.go:141] libmachine: (ha-199780-m03)   </os>
	I1009 19:12:43.638435   28654 main.go:141] libmachine: (ha-199780-m03)   <devices>
	I1009 19:12:43.638451   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='cdrom'>
	I1009 19:12:43.638468   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/boot2docker.iso'/>
	I1009 19:12:43.638480   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hdc' bus='scsi'/>
	I1009 19:12:43.638491   28654 main.go:141] libmachine: (ha-199780-m03)       <readonly/>
	I1009 19:12:43.638498   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638511   28654 main.go:141] libmachine: (ha-199780-m03)     <disk type='file' device='disk'>
	I1009 19:12:43.638529   28654 main.go:141] libmachine: (ha-199780-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 19:12:43.638545   28654 main.go:141] libmachine: (ha-199780-m03)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/ha-199780-m03.rawdisk'/>
	I1009 19:12:43.638557   28654 main.go:141] libmachine: (ha-199780-m03)       <target dev='hda' bus='virtio'/>
	I1009 19:12:43.638566   28654 main.go:141] libmachine: (ha-199780-m03)     </disk>
	I1009 19:12:43.638575   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638585   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='mk-ha-199780'/>
	I1009 19:12:43.638600   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638613   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638624   28654 main.go:141] libmachine: (ha-199780-m03)     <interface type='network'>
	I1009 19:12:43.638637   28654 main.go:141] libmachine: (ha-199780-m03)       <source network='default'/>
	I1009 19:12:43.638647   28654 main.go:141] libmachine: (ha-199780-m03)       <model type='virtio'/>
	I1009 19:12:43.638658   28654 main.go:141] libmachine: (ha-199780-m03)     </interface>
	I1009 19:12:43.638665   28654 main.go:141] libmachine: (ha-199780-m03)     <serial type='pty'>
	I1009 19:12:43.638685   28654 main.go:141] libmachine: (ha-199780-m03)       <target port='0'/>
	I1009 19:12:43.638701   28654 main.go:141] libmachine: (ha-199780-m03)     </serial>
	I1009 19:12:43.638713   28654 main.go:141] libmachine: (ha-199780-m03)     <console type='pty'>
	I1009 19:12:43.638724   28654 main.go:141] libmachine: (ha-199780-m03)       <target type='serial' port='0'/>
	I1009 19:12:43.638734   28654 main.go:141] libmachine: (ha-199780-m03)     </console>
	I1009 19:12:43.638742   28654 main.go:141] libmachine: (ha-199780-m03)     <rng model='virtio'>
	I1009 19:12:43.638760   28654 main.go:141] libmachine: (ha-199780-m03)       <backend model='random'>/dev/random</backend>
	I1009 19:12:43.638775   28654 main.go:141] libmachine: (ha-199780-m03)     </rng>
	I1009 19:12:43.638786   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638796   28654 main.go:141] libmachine: (ha-199780-m03)     
	I1009 19:12:43.638812   28654 main.go:141] libmachine: (ha-199780-m03)   </devices>
	I1009 19:12:43.638828   28654 main.go:141] libmachine: (ha-199780-m03) </domain>
	I1009 19:12:43.638836   28654 main.go:141] libmachine: (ha-199780-m03) 
	I1009 19:12:43.645429   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:1f:d1:3b in network default
	I1009 19:12:43.645983   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:43.646001   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring networks are active...
	I1009 19:12:43.646747   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network default is active
	I1009 19:12:43.647149   28654 main.go:141] libmachine: (ha-199780-m03) Ensuring network mk-ha-199780 is active
	I1009 19:12:43.647523   28654 main.go:141] libmachine: (ha-199780-m03) Getting domain xml...
	I1009 19:12:43.648287   28654 main.go:141] libmachine: (ha-199780-m03) Creating domain...
	I1009 19:12:44.847549   28654 main.go:141] libmachine: (ha-199780-m03) Waiting to get IP...
	I1009 19:12:44.848392   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:44.848787   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:44.848829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:44.848770   29426 retry.go:31] will retry after 229.997293ms: waiting for machine to come up
	I1009 19:12:45.079971   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.080455   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.080486   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.080421   29426 retry.go:31] will retry after 304.992826ms: waiting for machine to come up
	I1009 19:12:45.386902   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.387362   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.387386   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.387322   29426 retry.go:31] will retry after 327.958718ms: waiting for machine to come up
	I1009 19:12:45.716733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:45.717214   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:45.717239   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:45.717174   29426 retry.go:31] will retry after 508.576077ms: waiting for machine to come up
	I1009 19:12:46.227904   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.228327   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.228353   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.228287   29426 retry.go:31] will retry after 585.555609ms: waiting for machine to come up
	I1009 19:12:46.814896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:46.815296   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:46.815326   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:46.815257   29426 retry.go:31] will retry after 940.877771ms: waiting for machine to come up
	I1009 19:12:47.757334   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:47.757733   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:47.757767   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:47.757680   29426 retry.go:31] will retry after 1.078987913s: waiting for machine to come up
	I1009 19:12:48.838156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:48.838584   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:48.838612   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:48.838534   29426 retry.go:31] will retry after 1.204337562s: waiting for machine to come up
	I1009 19:12:50.044036   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:50.044425   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:50.044447   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:50.044387   29426 retry.go:31] will retry after 1.424565558s: waiting for machine to come up
	I1009 19:12:51.470825   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:51.471291   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:51.471328   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:51.471250   29426 retry.go:31] will retry after 1.95975676s: waiting for machine to come up
	I1009 19:12:53.432604   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:53.433116   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:53.433142   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:53.433070   29426 retry.go:31] will retry after 2.780245822s: waiting for machine to come up
	I1009 19:12:56.216025   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:56.216374   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:56.216395   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:56.216337   29426 retry.go:31] will retry after 3.28653641s: waiting for machine to come up
	I1009 19:12:59.504791   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:12:59.505156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:12:59.505184   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:12:59.505128   29426 retry.go:31] will retry after 4.186849932s: waiting for machine to come up
	I1009 19:13:03.693337   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:03.693747   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find current IP address of domain ha-199780-m03 in network mk-ha-199780
	I1009 19:13:03.693770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | I1009 19:13:03.693703   29426 retry.go:31] will retry after 5.146937605s: waiting for machine to come up
	I1009 19:13:08.842460   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.842868   28654 main.go:141] libmachine: (ha-199780-m03) Found IP for machine: 192.168.39.84
	I1009 19:13:08.842887   28654 main.go:141] libmachine: (ha-199780-m03) Reserving static IP address...
	I1009 19:13:08.842896   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has current primary IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.843320   28654 main.go:141] libmachine: (ha-199780-m03) DBG | unable to find host DHCP lease matching {name: "ha-199780-m03", mac: "52:54:00:15:92:44", ip: "192.168.39.84"} in network mk-ha-199780
	I1009 19:13:08.913543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Getting to WaitForSSH function...
	I1009 19:13:08.913573   28654 main.go:141] libmachine: (ha-199780-m03) Reserved static IP address: 192.168.39.84
	I1009 19:13:08.913586   28654 main.go:141] libmachine: (ha-199780-m03) Waiting for SSH to be available...
	I1009 19:13:08.916270   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916658   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:15:92:44}
	I1009 19:13:08.916682   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:08.916805   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH client type: external
	I1009 19:13:08.916829   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa (-rw-------)
	I1009 19:13:08.916873   28654 main.go:141] libmachine: (ha-199780-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 19:13:08.916898   28654 main.go:141] libmachine: (ha-199780-m03) DBG | About to run SSH command:
	I1009 19:13:08.916914   28654 main.go:141] libmachine: (ha-199780-m03) DBG | exit 0
	I1009 19:13:09.046941   28654 main.go:141] libmachine: (ha-199780-m03) DBG | SSH cmd err, output: <nil>: 
	I1009 19:13:09.047218   28654 main.go:141] libmachine: (ha-199780-m03) KVM machine creation complete!
	I1009 19:13:09.047540   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:09.048076   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048290   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:09.048435   28654 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 19:13:09.048449   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetState
	I1009 19:13:09.049768   28654 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 19:13:09.049784   28654 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 19:13:09.049792   28654 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 19:13:09.049800   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.051899   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052232   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.052256   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.052390   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.052558   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052690   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.052792   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.052919   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.053134   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.053146   28654 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 19:13:09.162161   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.162193   28654 main.go:141] libmachine: Detecting the provisioner...
	I1009 19:13:09.162204   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.165282   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165740   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.165770   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.165998   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.166189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166372   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.166511   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.166658   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.166820   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.166830   28654 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 19:13:09.279803   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 19:13:09.279876   28654 main.go:141] libmachine: found compatible host: buildroot
	I1009 19:13:09.279888   28654 main.go:141] libmachine: Provisioning with buildroot...
	I1009 19:13:09.279896   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280130   28654 buildroot.go:166] provisioning hostname "ha-199780-m03"
	I1009 19:13:09.280155   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.280355   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.282543   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.282879   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.282903   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.283031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.283188   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283335   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.283479   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.283637   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.283800   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.283813   28654 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780-m03 && echo "ha-199780-m03" | sudo tee /etc/hostname
	I1009 19:13:09.410249   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780-m03
	
	I1009 19:13:09.410286   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.413156   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.413597   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.413831   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.414036   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414189   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.414350   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.414484   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.414653   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.414676   28654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:13:09.536419   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:13:09.536443   28654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:13:09.536456   28654 buildroot.go:174] setting up certificates
	I1009 19:13:09.536466   28654 provision.go:84] configureAuth start
	I1009 19:13:09.536474   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetMachineName
	I1009 19:13:09.536766   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:09.539383   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539742   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.539769   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.539905   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.542068   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542398   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.542433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.542583   28654 provision.go:143] copyHostCerts
	I1009 19:13:09.542606   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542633   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:13:09.542642   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:13:09.542706   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:13:09.542776   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542794   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:13:09.542798   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:13:09.542825   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:13:09.542870   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542886   28654 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:13:09.542891   28654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:13:09.542910   28654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:13:09.542956   28654 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780-m03 san=[127.0.0.1 192.168.39.84 ha-199780-m03 localhost minikube]
	I1009 19:13:09.606712   28654 provision.go:177] copyRemoteCerts
	I1009 19:13:09.606761   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:13:09.606781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.609303   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609661   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.609689   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.609868   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.610022   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.610145   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.610298   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:09.696779   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:13:09.696841   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:13:09.720751   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:13:09.720811   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:13:09.744059   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:13:09.744114   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 19:13:09.767833   28654 provision.go:87] duration metric: took 231.356763ms to configureAuth
	I1009 19:13:09.767867   28654 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:13:09.768111   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:09.768195   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:09.770602   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.770927   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:09.770956   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:09.771124   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:09.771314   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771473   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:09.771621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:09.771780   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:09.771973   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:09.772002   28654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:13:09.999632   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:13:09.999662   28654 main.go:141] libmachine: Checking connection to Docker...
	I1009 19:13:09.999673   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetURL
	I1009 19:13:10.001043   28654 main.go:141] libmachine: (ha-199780-m03) DBG | Using libvirt version 6000000
	I1009 19:13:10.002982   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003339   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.003364   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.003485   28654 main.go:141] libmachine: Docker is up and running!
	I1009 19:13:10.003499   28654 main.go:141] libmachine: Reticulating splines...
	I1009 19:13:10.003506   28654 client.go:171] duration metric: took 26.786200346s to LocalClient.Create
	I1009 19:13:10.003528   28654 start.go:167] duration metric: took 26.786259048s to libmachine.API.Create "ha-199780"
	I1009 19:13:10.003541   28654 start.go:293] postStartSetup for "ha-199780-m03" (driver="kvm2")
	I1009 19:13:10.003557   28654 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:13:10.003580   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.003751   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:13:10.003777   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.005954   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006305   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.006342   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.006472   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.006621   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.006781   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.006914   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.097042   28654 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:13:10.101538   28654 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:13:10.101559   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:13:10.101628   28654 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:13:10.101716   28654 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:13:10.101727   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:13:10.101831   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:13:10.111544   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:10.138321   28654 start.go:296] duration metric: took 134.764482ms for postStartSetup
	I1009 19:13:10.138362   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetConfigRaw
	I1009 19:13:10.138886   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.141464   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.141752   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.141798   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.142045   28654 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:13:10.142239   28654 start.go:128] duration metric: took 26.94338984s to createHost
	I1009 19:13:10.142260   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.144573   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.144860   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.144895   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.145048   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.145233   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145397   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.145561   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.145727   28654 main.go:141] libmachine: Using SSH client type: native
	I1009 19:13:10.145915   28654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1009 19:13:10.145928   28654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:13:10.259958   28654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501190.239755663
	
	I1009 19:13:10.259981   28654 fix.go:216] guest clock: 1728501190.239755663
	I1009 19:13:10.259990   28654 fix.go:229] Guest: 2024-10-09 19:13:10.239755663 +0000 UTC Remote: 2024-10-09 19:13:10.142249873 +0000 UTC m=+147.747443556 (delta=97.50579ms)
	I1009 19:13:10.260009   28654 fix.go:200] guest clock delta is within tolerance: 97.50579ms
	I1009 19:13:10.260014   28654 start.go:83] releasing machines lock for "ha-199780-m03", held for 27.061310572s
	I1009 19:13:10.260031   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.260248   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:10.262692   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.263042   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.263090   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.265368   28654 out.go:177] * Found network options:
	I1009 19:13:10.266603   28654 out.go:177]   - NO_PROXY=192.168.39.114,192.168.39.83
	W1009 19:13:10.267719   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.267740   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.267752   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268176   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268354   28654 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:13:10.268457   28654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:13:10.268495   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	W1009 19:13:10.268522   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 19:13:10.268539   28654 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 19:13:10.268607   28654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:13:10.268629   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:13:10.271001   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271378   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271413   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271433   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.271563   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.271675   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.271760   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.271841   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.271883   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:10.271905   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:10.272050   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:13:10.272201   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:13:10.272349   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:13:10.272499   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:13:10.509806   28654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:13:10.515665   28654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:13:10.515723   28654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:13:10.534296   28654 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 19:13:10.534319   28654 start.go:495] detecting cgroup driver to use...
	I1009 19:13:10.534372   28654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:13:10.550041   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:13:10.563633   28654 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:13:10.563683   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:13:10.577637   28654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:13:10.592588   28654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:13:10.712305   28654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:13:10.879292   28654 docker.go:233] disabling docker service ...
	I1009 19:13:10.879381   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:13:10.894134   28654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:13:10.907059   28654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:13:11.025068   28654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:13:11.146057   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:13:11.160573   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:13:11.181994   28654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:13:11.182045   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.191765   28654 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:13:11.191812   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.201883   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.212073   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.222390   28654 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:13:11.232857   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.243298   28654 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.262217   28654 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:13:11.272906   28654 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:13:11.282747   28654 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 19:13:11.282797   28654 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 19:13:11.296609   28654 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:13:11.306096   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:11.423441   28654 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:13:11.515740   28654 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:13:11.515821   28654 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:13:11.520647   28654 start.go:563] Will wait 60s for crictl version
	I1009 19:13:11.520700   28654 ssh_runner.go:195] Run: which crictl
	I1009 19:13:11.524288   28654 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:13:11.564050   28654 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:13:11.564119   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.592463   28654 ssh_runner.go:195] Run: crio --version
	I1009 19:13:11.620536   28654 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:13:11.622484   28654 out.go:177]   - env NO_PROXY=192.168.39.114
	I1009 19:13:11.623769   28654 out.go:177]   - env NO_PROXY=192.168.39.114,192.168.39.83
	I1009 19:13:11.624794   28654 main.go:141] libmachine: (ha-199780-m03) Calling .GetIP
	I1009 19:13:11.627494   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.627836   28654 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:13:11.627861   28654 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:13:11.628050   28654 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:13:11.632057   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:11.644307   28654 mustload.go:65] Loading cluster: ha-199780
	I1009 19:13:11.644526   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:11.644823   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.644864   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.660098   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35321
	I1009 19:13:11.660500   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.660929   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.660963   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.661312   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.661490   28654 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:13:11.662965   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:11.663268   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:11.663304   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:11.677584   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37189
	I1009 19:13:11.678002   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:11.678412   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:11.678433   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:11.678716   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:11.678874   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:11.678992   28654 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.84
	I1009 19:13:11.679002   28654 certs.go:194] generating shared ca certs ...
	I1009 19:13:11.679014   28654 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.679142   28654 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:13:11.679180   28654 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:13:11.679190   28654 certs.go:256] generating profile certs ...
	I1009 19:13:11.679253   28654 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:13:11.679275   28654 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8
	I1009 19:13:11.679293   28654 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:13:11.751003   28654 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 ...
	I1009 19:13:11.751029   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8: {Name:mkf155e8357b65010528843e053f2a71f20ad105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751190   28654 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 ...
	I1009 19:13:11.751202   28654 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8: {Name:mk6ff6d5eec7167bd850e69dc06edb50691eb6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:13:11.751267   28654 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:13:11.751393   28654 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.b4489fb8 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:13:11.751509   28654 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:13:11.751523   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:13:11.751535   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:13:11.751550   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:13:11.751563   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:13:11.751576   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:13:11.751588   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:13:11.751600   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:13:11.771159   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:13:11.771229   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:13:11.771259   28654 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:13:11.771269   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:13:11.771293   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:13:11.771314   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:13:11.771335   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:13:11.771370   28654 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:13:11.771395   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:13:11.771408   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:11.771420   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:13:11.771451   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:11.774438   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.774845   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:11.774865   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:11.775017   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:11.775204   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:11.775350   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:11.775478   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:11.851359   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1009 19:13:11.856664   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1009 19:13:11.868123   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1009 19:13:11.875260   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1009 19:13:11.887341   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1009 19:13:11.891724   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1009 19:13:11.902332   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1009 19:13:11.906621   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1009 19:13:11.916908   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1009 19:13:11.921562   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1009 19:13:11.931584   28654 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1009 19:13:11.935971   28654 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1009 19:13:11.946941   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:13:11.972757   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:13:11.996080   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:13:12.019624   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:13:12.042711   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1009 19:13:12.067239   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:13:12.094118   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:13:12.120234   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:13:12.143055   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:13:12.165868   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:13:12.188853   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:13:12.211293   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1009 19:13:12.227623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1009 19:13:12.243623   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1009 19:13:12.260811   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1009 19:13:12.278131   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1009 19:13:12.295237   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1009 19:13:12.312441   28654 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1009 19:13:12.328516   28654 ssh_runner.go:195] Run: openssl version
	I1009 19:13:12.334428   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:13:12.345201   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349589   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.349627   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:13:12.355741   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:13:12.366097   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:13:12.376756   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381423   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.381474   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:13:12.387265   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:13:12.398550   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:13:12.410065   28654 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414879   28654 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.414939   28654 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:13:12.420521   28654 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:13:12.431459   28654 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:13:12.435599   28654 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 19:13:12.435653   28654 kubeadm.go:934] updating node {m03 192.168.39.84 8443 v1.31.1 crio true true} ...
	I1009 19:13:12.435745   28654 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:13:12.435776   28654 kube-vip.go:115] generating kube-vip config ...
	I1009 19:13:12.435816   28654 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:13:12.450815   28654 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:13:12.450880   28654 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:13:12.450927   28654 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.462732   28654 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1009 19:13:12.462797   28654 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1009 19:13:12.473333   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1009 19:13:12.473358   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473356   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1009 19:13:12.473375   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473392   28654 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1009 19:13:12.473419   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1009 19:13:12.473431   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1009 19:13:12.473433   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:13:12.484568   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1009 19:13:12.484600   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1009 19:13:12.496090   28654 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496156   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1009 19:13:12.496169   28654 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1009 19:13:12.496179   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1009 19:13:12.547231   28654 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1009 19:13:12.547271   28654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1009 19:13:13.298298   28654 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1009 19:13:13.308347   28654 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 19:13:13.325500   28654 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:13:13.341701   28654 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:13:13.358009   28654 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:13:13.361852   28654 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 19:13:13.374963   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:13.498686   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:13.518977   28654 host.go:66] Checking if "ha-199780" exists ...
	I1009 19:13:13.519473   28654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:13:13.519531   28654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:13:13.538200   28654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45109
	I1009 19:13:13.538624   28654 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:13:13.539117   28654 main.go:141] libmachine: Using API Version  1
	I1009 19:13:13.539147   28654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:13:13.539481   28654 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:13:13.539662   28654 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:13:13.539788   28654 start.go:317] joinCluster: &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:13:13.539943   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 19:13:13.539967   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:13:13.542836   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543274   28654 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:13:13.543303   28654 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:13:13.543418   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:13:13.543577   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:13:13.543722   28654 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:13:13.543861   28654 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:13:13.700075   28654 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:13.700122   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443"
	I1009 19:13:36.009706   28654 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d41j1t.hzhz2w4cpck4u6sv --discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-199780-m03 --control-plane --apiserver-advertise-address=192.168.39.84 --apiserver-bind-port=8443": (22.309560416s)
	I1009 19:13:36.009741   28654 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 19:13:36.574647   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-199780-m03 minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=ha-199780 minikube.k8s.io/primary=false
	I1009 19:13:36.718344   28654 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-199780-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1009 19:13:36.828582   28654 start.go:319] duration metric: took 23.288789983s to joinCluster
	I1009 19:13:36.828663   28654 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 19:13:36.828971   28654 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:36.830104   28654 out.go:177] * Verifying Kubernetes components...
	I1009 19:13:36.831350   28654 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:13:37.149519   28654 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:13:37.192508   28654 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:13:37.192892   28654 kapi.go:59] client config for ha-199780: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.crt", KeyFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key", CAFile:"/home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x242f780), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1009 19:13:37.192972   28654 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.114:8443
	I1009 19:13:37.193248   28654 node_ready.go:35] waiting up to 6m0s for node "ha-199780-m03" to be "Ready" ...
	I1009 19:13:37.193328   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.193338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.193350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.193359   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.197001   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:37.693747   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:37.693768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:37.693780   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:37.693785   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:37.697648   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.193891   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.193913   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.193924   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.193929   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.197274   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:38.693429   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:38.693457   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:38.693469   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:38.693474   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:38.696864   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:39.193488   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.193508   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.193514   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.193519   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.196227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:39.196768   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:39.694269   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:39.694294   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:39.694306   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:39.694313   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:39.697293   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:40.193909   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.193938   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.193948   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.193953   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.197226   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:40.693770   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:40.693793   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:40.693804   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:40.693809   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:40.697070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:41.194260   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.194291   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.194295   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.197138   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:41.197715   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:41.694049   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:41.694075   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:41.694087   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:41.694094   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:41.697134   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.194287   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.194311   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.194321   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.194327   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.197589   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:42.693552   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:42.693571   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:42.693581   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:42.693588   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:42.696963   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.193761   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.193786   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.193798   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.193806   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.197438   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:43.198158   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:43.693694   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:43.693716   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:43.693724   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:43.693728   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:43.697267   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.193683   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.193704   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.193711   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.193715   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.197056   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:44.693897   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:44.693918   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:44.693928   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:44.693933   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:44.696914   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:45.193775   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.193795   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.193803   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.193807   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.197164   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.694421   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:45.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:45.694455   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:45.694461   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:45.697506   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:45.698052   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:46.193428   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.193455   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.193486   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.193492   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.197151   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:46.693979   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:46.693997   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:46.694013   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:46.694017   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:46.697611   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.193578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.193600   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.193607   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.193611   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.197105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:47.693781   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:47.693802   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:47.693813   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:47.693817   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:47.696934   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:48.194335   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.194358   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.194365   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.194368   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.198434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:48.199180   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:48.693737   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:48.693758   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:48.693768   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:48.693773   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:48.697344   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:49.193432   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.193451   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.193459   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.193463   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.196304   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:49.694364   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:49.694385   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:49.694396   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:49.694403   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:49.697486   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.193397   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.193418   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.193431   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.193435   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.197076   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.693831   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:50.693856   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:50.693867   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:50.693873   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:50.697369   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:50.698284   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:51.194258   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.194282   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.194289   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.194294   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.197449   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:51.694317   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:51.694339   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:51.694350   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:51.694356   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:51.698049   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.194018   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.194043   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.194052   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.194061   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.197494   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:52.694202   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:52.694224   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:52.694232   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:52.694236   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:52.697227   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:53.193702   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.193722   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.193729   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.193733   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.196923   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:53.197555   28654 node_ready.go:53] node "ha-199780-m03" has status "Ready":"False"
	I1009 19:13:53.694135   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:53.694158   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:53.694166   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:53.694172   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:53.697390   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:54.193409   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.193427   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.193439   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.193443   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.195968   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.693832   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:54.693853   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.693861   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.693866   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.696718   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.697386   28654 node_ready.go:49] node "ha-199780-m03" has status "Ready":"True"
	I1009 19:13:54.697405   28654 node_ready.go:38] duration metric: took 17.504141075s for node "ha-199780-m03" to be "Ready" ...
	I1009 19:13:54.697413   28654 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
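The node_ready poll above simply GETs the node object every ~500ms and checks its Ready condition; the pod_ready waits that follow use the same pattern against pod.Status.Conditions. A minimal client-go sketch of that polling loop, reusing the node name from the log and assuming a kubeconfig path (an illustration, not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; minikube writes one under ~/.kube by default.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll roughly every 500ms, matching the round_trippers timestamps above.
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-199780-m03", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("node ha-199780-m03 is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }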
	I1009 19:13:54.697463   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:13:54.697471   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.697479   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.697484   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.703461   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:13:54.710054   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.710118   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-r8lg7
	I1009 19:13:54.710126   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.710133   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.710136   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.712863   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.713585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.713602   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.713609   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.713613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.715857   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.716501   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.716519   28654 pod_ready.go:82] duration metric: took 6.443501ms for pod "coredns-7c65d6cfc9-r8lg7" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716529   28654 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.716578   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-v5k75
	I1009 19:13:54.716586   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.716593   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.716599   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.718834   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.719475   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.719490   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.719499   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.719505   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.721592   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.722022   28654 pod_ready.go:93] pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.722036   28654 pod_ready.go:82] duration metric: took 5.49901ms for pod "coredns-7c65d6cfc9-v5k75" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722045   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.722092   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780
	I1009 19:13:54.722102   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.722111   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.722117   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.724132   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.724537   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:54.724549   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.724558   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.724564   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.726416   28654 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 19:13:54.726760   28654 pod_ready.go:93] pod "etcd-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.726774   28654 pod_ready.go:82] duration metric: took 4.721439ms for pod "etcd-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726783   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.726829   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m02
	I1009 19:13:54.726838   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.726847   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.726853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.728868   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.729481   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:54.729499   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.729510   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.729515   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.731574   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:54.732095   28654 pod_ready.go:93] pod "etcd-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:54.732112   28654 pod_ready.go:82] duration metric: took 5.322203ms for pod "etcd-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.732123   28654 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:54.894472   28654 request.go:632] Waited for 162.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894602   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/etcd-ha-199780-m03
	I1009 19:13:54.894612   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:54.894619   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:54.894623   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:54.897741   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.094188   28654 request.go:632] Waited for 195.683908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094240   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:55.094246   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.094253   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.094258   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.097407   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.098074   28654 pod_ready.go:93] pod "etcd-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.098090   28654 pod_ready.go:82] duration metric: took 365.959261ms for pod "etcd-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
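The repeated "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter (roughly QPS 5 / Burst 10 when left unset), not from server-side API Priority and Fairness; the burst of per-pod GETs above simply exceeds that local budget. A hedged sketch of raising those limits on the client config (the values are illustrative, not a recommendation):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient builds a clientset with a larger client-side rate budget.
    // kubeconfig is assumed to point at the cluster's admin kubeconfig.
    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go defaults to ~5 when this is zero
        cfg.Burst = 100 // client-go defaults to ~10 when this is zero
        return kubernetes.NewForConfig(cfg)
    }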
	I1009 19:13:55.098111   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.294211   28654 request.go:632] Waited for 196.026886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294264   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780
	I1009 19:13:55.294270   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.294277   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.294281   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.297814   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.494347   28654 request.go:632] Waited for 195.288987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494396   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:55.494400   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.494409   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.494414   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.497640   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.498264   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.498282   28654 pod_ready.go:82] duration metric: took 400.159789ms for pod "kube-apiserver-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.498295   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.694371   28654 request.go:632] Waited for 196.007868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694438   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m02
	I1009 19:13:55.694444   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.694452   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.694457   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.697453   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:55.894821   28654 request.go:632] Waited for 196.365606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894877   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:55.894894   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:55.894903   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:55.894908   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:55.898105   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:55.898641   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:55.898656   28654 pod_ready.go:82] duration metric: took 400.354565ms for pod "kube-apiserver-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:55.898665   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.094875   28654 request.go:632] Waited for 196.142376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094943   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-199780-m03
	I1009 19:13:56.094953   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.094962   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.094969   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.098488   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.294812   28654 request.go:632] Waited for 195.339632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294879   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:56.294886   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.294897   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.294905   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.298371   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.299243   28654 pod_ready.go:93] pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.299268   28654 pod_ready.go:82] duration metric: took 400.59742ms for pod "kube-apiserver-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.299278   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.494432   28654 request.go:632] Waited for 195.083743ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494487   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780
	I1009 19:13:56.494493   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.494503   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.494508   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.498203   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.694515   28654 request.go:632] Waited for 195.651266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694569   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:56.694574   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.694582   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.694589   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.697903   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:56.698503   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:56.698524   28654 pod_ready.go:82] duration metric: took 399.235411ms for pod "kube-controller-manager-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.698534   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:56.894604   28654 request.go:632] Waited for 196.010295ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894690   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m02
	I1009 19:13:56.894699   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:56.894709   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:56.894725   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:56.897698   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:57.094771   28654 request.go:632] Waited for 196.347164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094830   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:57.094837   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.094846   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.094853   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.097915   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.098466   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.098483   28654 pod_ready.go:82] duration metric: took 399.942607ms for pod "kube-controller-manager-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.098496   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.294694   28654 request.go:632] Waited for 196.107304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294760   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-199780-m03
	I1009 19:13:57.294768   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.294778   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.294791   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.298281   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.493859   28654 request.go:632] Waited for 194.862003ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493928   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.493933   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.493941   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.493945   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.497771   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.498530   28654 pod_ready.go:93] pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.498546   28654 pod_ready.go:82] duration metric: took 400.036948ms for pod "kube-controller-manager-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.498556   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.694138   28654 request.go:632] Waited for 195.506846ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694198   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cltcd
	I1009 19:13:57.694204   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.694211   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.694217   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.698240   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:13:57.894301   28654 request.go:632] Waited for 195.370676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894370   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:57.894377   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:57.894391   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:57.894398   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:57.897846   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:57.898728   28654 pod_ready.go:93] pod "kube-proxy-cltcd" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:57.898745   28654 pod_ready.go:82] duration metric: took 400.184495ms for pod "kube-proxy-cltcd" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:57.898756   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.094244   28654 request.go:632] Waited for 195.417272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094320   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n8ffq
	I1009 19:13:58.094332   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.094339   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.094343   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.098070   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.294156   28654 request.go:632] Waited for 195.371857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294219   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:58.294226   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.294237   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.294245   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.297391   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.297856   28654 pod_ready.go:93] pod "kube-proxy-n8ffq" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.297872   28654 pod_ready.go:82] duration metric: took 399.106499ms for pod "kube-proxy-n8ffq" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.297884   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.493870   28654 request.go:632] Waited for 195.913549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493922   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zfsq8
	I1009 19:13:58.493927   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.493937   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.493944   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.497117   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.694489   28654 request.go:632] Waited for 196.566825ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694545   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:58.694552   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.694563   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.694568   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.697679   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:58.698297   28654 pod_ready.go:93] pod "kube-proxy-zfsq8" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:58.698312   28654 pod_ready.go:82] duration metric: took 400.419475ms for pod "kube-proxy-zfsq8" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.698322   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:58.894499   28654 request.go:632] Waited for 196.088891ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894585   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780
	I1009 19:13:58.894592   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:58.894603   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:58.894613   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:58.897964   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.094228   28654 request.go:632] Waited for 195.366071ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094310   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780
	I1009 19:13:59.094322   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.094333   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.094342   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.097557   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.098186   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.098207   28654 pod_ready.go:82] duration metric: took 399.878488ms for pod "kube-scheduler-ha-199780" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.098219   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.294278   28654 request.go:632] Waited for 195.983419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294332   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m02
	I1009 19:13:59.294338   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.294345   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.294350   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.297821   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:13:59.493975   28654 request.go:632] Waited for 195.208037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494031   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m02
	I1009 19:13:59.494036   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.494044   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.494049   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.501563   28654 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1009 19:13:59.502080   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.502097   28654 pod_ready.go:82] duration metric: took 403.868133ms for pod "kube-scheduler-ha-199780-m02" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.502106   28654 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.694192   28654 request.go:632] Waited for 192.028751ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694247   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-199780-m03
	I1009 19:13:59.694253   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.694260   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.694264   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.697180   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.894169   28654 request.go:632] Waited for 196.350026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894218   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes/ha-199780-m03
	I1009 19:13:59.894223   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.894230   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.894235   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.897240   28654 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 19:13:59.897806   28654 pod_ready.go:93] pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace has status "Ready":"True"
	I1009 19:13:59.897823   28654 pod_ready.go:82] duration metric: took 395.71123ms for pod "kube-scheduler-ha-199780-m03" in "kube-system" namespace to be "Ready" ...
	I1009 19:13:59.897835   28654 pod_ready.go:39] duration metric: took 5.200413633s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 19:13:59.897849   28654 api_server.go:52] waiting for apiserver process to appear ...
	I1009 19:13:59.897900   28654 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:13:59.914617   28654 api_server.go:72] duration metric: took 23.08591673s to wait for apiserver process to appear ...
	I1009 19:13:59.914639   28654 api_server.go:88] waiting for apiserver healthz status ...
	I1009 19:13:59.914655   28654 api_server.go:253] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1009 19:13:59.918628   28654 api_server.go:279] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1009 19:13:59.918679   28654 round_trippers.go:463] GET https://192.168.39.114:8443/version
	I1009 19:13:59.918686   28654 round_trippers.go:469] Request Headers:
	I1009 19:13:59.918696   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:13:59.918706   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:13:59.919571   28654 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1009 19:13:59.919687   28654 api_server.go:141] control plane version: v1.31.1
	I1009 19:13:59.919708   28654 api_server.go:131] duration metric: took 5.063855ms to wait for apiserver health ...
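After confirming the kube-apiserver process with pgrep, the health check above probes the API server's /healthz endpoint (expecting the literal body "ok") and then /version to read the control-plane version (v1.31.1 here). An illustrative equivalent using client-go's REST and discovery clients; interactively, `kubectl get --raw /healthz` performs the same probe:

    // checkAPIServer probes /healthz and /version; cs is a *kubernetes.Clientset
    // built as in the earlier sketches (imports: context, fmt, k8s.io/client-go/kubernetes).
    func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body) // "ok" on a healthy apiserver

        v, err := cs.Discovery().ServerVersion() // GET /version
        if err != nil {
            return err
        }
        fmt.Println("control plane version:", v.GitVersion)
        return nil
    }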
	I1009 19:13:59.919716   28654 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 19:14:00.094827   28654 request.go:632] Waited for 175.023163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094896   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.094904   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.094915   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.094925   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.100594   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.107658   28654 system_pods.go:59] 24 kube-system pods found
	I1009 19:14:00.107684   28654 system_pods.go:61] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.107689   28654 system_pods.go:61] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.107692   28654 system_pods.go:61] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.107695   28654 system_pods.go:61] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.107699   28654 system_pods.go:61] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.107702   28654 system_pods.go:61] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.107706   28654 system_pods.go:61] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.107711   28654 system_pods.go:61] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.107716   28654 system_pods.go:61] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.107721   28654 system_pods.go:61] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.107725   28654 system_pods.go:61] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.107733   28654 system_pods.go:61] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.107738   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.107747   28654 system_pods.go:61] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.107754   28654 system_pods.go:61] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.107758   28654 system_pods.go:61] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.107765   28654 system_pods.go:61] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.107770   28654 system_pods.go:61] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.107777   28654 system_pods.go:61] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.107783   28654 system_pods.go:61] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.107790   28654 system_pods.go:61] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.107795   28654 system_pods.go:61] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.107802   28654 system_pods.go:61] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.107808   28654 system_pods.go:61] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.107818   28654 system_pods.go:74] duration metric: took 188.095908ms to wait for pod list to return data ...
	I1009 19:14:00.107830   28654 default_sa.go:34] waiting for default service account to be created ...
	I1009 19:14:00.294248   28654 request.go:632] Waited for 186.335259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294301   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/default/serviceaccounts
	I1009 19:14:00.294308   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.294318   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.294323   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.298434   28654 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 19:14:00.298601   28654 default_sa.go:45] found service account: "default"
	I1009 19:14:00.298618   28654 default_sa.go:55] duration metric: took 190.779244ms for default service account to be created ...
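The default_sa step only confirms that the "default" ServiceAccount exists in the default namespace before moving on. An illustrative check with client-go (clientset and imports as in the earlier sketches; apierrors is k8s.io/apimachinery/pkg/api/errors):

    // defaultServiceAccountExists reports whether the "default" ServiceAccount
    // has been created yet in the default namespace.
    func defaultServiceAccountExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }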
	I1009 19:14:00.298632   28654 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 19:14:00.493990   28654 request.go:632] Waited for 195.280768ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494052   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/namespaces/kube-system/pods
	I1009 19:14:00.494059   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.494069   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.494077   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.499571   28654 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1009 19:14:00.506443   28654 system_pods.go:86] 24 kube-system pods found
	I1009 19:14:00.506469   28654 system_pods.go:89] "coredns-7c65d6cfc9-r8lg7" [57280df1-97c4-4a11-ab5f-71e52d6f5ebe] Running
	I1009 19:14:00.506474   28654 system_pods.go:89] "coredns-7c65d6cfc9-v5k75" [9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7] Running
	I1009 19:14:00.506478   28654 system_pods.go:89] "etcd-ha-199780" [97bbc639-8ac1-48c0-92a7-6049bb10a0ef] Running
	I1009 19:14:00.506482   28654 system_pods.go:89] "etcd-ha-199780-m02" [7561ef0e-457a-413e-ad02-13393167b214] Running
	I1009 19:14:00.506486   28654 system_pods.go:89] "etcd-ha-199780-m03" [b174991f-8f2c-44b1-8646-5ee7533a9a67] Running
	I1009 19:14:00.506490   28654 system_pods.go:89] "kindnet-2gjpk" [cb845072-bb38-485a-8b25-5d930021781f] Running
	I1009 19:14:00.506495   28654 system_pods.go:89] "kindnet-b8ff2" [6f00c0c0-aae5-4bfa-aa9f-30f7c79fa343] Running
	I1009 19:14:00.506503   28654 system_pods.go:89] "kindnet-pwr8x" [5f7dea68-5587-4e23-b614-06ee808fb88a] Running
	I1009 19:14:00.506511   28654 system_pods.go:89] "kube-apiserver-ha-199780" [b49765ad-b5ef-409b-8ea9-bb4434093f93] Running
	I1009 19:14:00.506518   28654 system_pods.go:89] "kube-apiserver-ha-199780-m02" [b669fa4f-0c97-4e7d-a2a7-0be61f0950d7] Running
	I1009 19:14:00.506527   28654 system_pods.go:89] "kube-apiserver-ha-199780-m03" [9d8e77cd-db60-4329-91ea-012315e5beaf] Running
	I1009 19:14:00.506539   28654 system_pods.go:89] "kube-controller-manager-ha-199780" [98134671-eb08-47ab-b7b4-a6b8fa9564b4] Running
	I1009 19:14:00.506548   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m02" [39b81940-9353-4ef0-8920-c0de9cdf7d1c] Running
	I1009 19:14:00.506555   28654 system_pods.go:89] "kube-controller-manager-ha-199780-m03" [36743602-a58a-4171-88c2-9a79af012f26] Running
	I1009 19:14:00.506558   28654 system_pods.go:89] "kube-proxy-cltcd" [1dad9ac4-00d4-497e-8d9b-3e20a5c35c10] Running
	I1009 19:14:00.506564   28654 system_pods.go:89] "kube-proxy-n8ffq" [83deff6c-dc09-49e3-9228-3edd039efd13] Running
	I1009 19:14:00.506569   28654 system_pods.go:89] "kube-proxy-zfsq8" [0092f8eb-0997-481d-9e7a-73b78b13ceca] Running
	I1009 19:14:00.506574   28654 system_pods.go:89] "kube-scheduler-ha-199780" [23499fbf-3678-4c99-8d26-540b1d3d7da3] Running
	I1009 19:14:00.506580   28654 system_pods.go:89] "kube-scheduler-ha-199780-m02" [328561e0-5914-41e0-9531-4d81eabc9d40] Running
	I1009 19:14:00.506585   28654 system_pods.go:89] "kube-scheduler-ha-199780-m03" [b687bb2e-b6eb-4c17-9762-537dd28919ff] Running
	I1009 19:14:00.506590   28654 system_pods.go:89] "kube-vip-ha-199780" [855ce49e-9a86-4af5-b6e9-92afc0fc662f] Running
	I1009 19:14:00.506598   28654 system_pods.go:89] "kube-vip-ha-199780-m02" [5d2ea70d-a8ff-411b-9382-c00bc48b2306] Running
	I1009 19:14:00.506602   28654 system_pods.go:89] "kube-vip-ha-199780-m03" [6fb0f84c-1cc8-4539-a879-4a8fe8bc3eb6] Running
	I1009 19:14:00.506610   28654 system_pods.go:89] "storage-provisioner" [d6e67924-f183-441c-b92e-1124d3582e0f] Running
	I1009 19:14:00.506619   28654 system_pods.go:126] duration metric: took 207.977758ms to wait for k8s-apps to be running ...
	I1009 19:14:00.506632   28654 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 19:14:00.506681   28654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:14:00.521903   28654 system_svc.go:56] duration metric: took 15.266021ms WaitForService to wait for kubelet
	I1009 19:14:00.521926   28654 kubeadm.go:582] duration metric: took 23.693227633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
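The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH and treats a zero exit status as "running". A local sketch of the same probe, assuming a systemd host (minikube runs the command inside the VM via its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl is-active --quiet exits 0 when the unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }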
	I1009 19:14:00.521941   28654 node_conditions.go:102] verifying NodePressure condition ...
	I1009 19:14:00.694326   28654 request.go:632] Waited for 172.306887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694392   28654 round_trippers.go:463] GET https://192.168.39.114:8443/api/v1/nodes
	I1009 19:14:00.694398   28654 round_trippers.go:469] Request Headers:
	I1009 19:14:00.694405   28654 round_trippers.go:473]     Accept: application/json, */*
	I1009 19:14:00.694409   28654 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1009 19:14:00.698331   28654 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 19:14:00.699548   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699566   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699577   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699581   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699584   28654 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 19:14:00.699587   28654 node_conditions.go:123] node cpu capacity is 2
	I1009 19:14:00.699591   28654 node_conditions.go:105] duration metric: took 177.645761ms to run NodePressure ...
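The NodePressure step lists every node and reads per-node capacity, which is where the "storage ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines above come from; pressure conditions such as MemoryPressure and DiskPressure live in node.Status.Conditions. An illustrative read of those capacity fields (clientset and imports as in the earlier sketches, plus corev1):

    // printNodeCapacity prints CPU and ephemeral-storage capacity for each node.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
        return nil
    }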
	I1009 19:14:00.699601   28654 start.go:241] waiting for startup goroutines ...
	I1009 19:14:00.699621   28654 start.go:255] writing updated cluster config ...
	I1009 19:14:00.699890   28654 ssh_runner.go:195] Run: rm -f paused
	I1009 19:14:00.750344   28654 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 19:14:00.752481   28654 out.go:177] * Done! kubectl is now configured to use "ha-199780" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.954090837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501477954065232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6541ba08-14a6-4be4-9d56-634967a3618c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.954613817Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e2ab218-3a34-40f6-86ec-03570e00bc2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.954664436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e2ab218-3a34-40f6-86ec-03570e00bc2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.954962007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e2ab218-3a34-40f6-86ec-03570e00bc2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.992529439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4459a326-d854-4b8a-9219-592cd9de9ef0 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.992604883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4459a326-d854-4b8a-9219-592cd9de9ef0 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.993390761Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=191799f9-6084-461b-84b6-428626b92c69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.993894291Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501477993867958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=191799f9-6084-461b-84b6-428626b92c69 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.994489435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8fffa63c-cd69-4240-8b6d-483410b09e7f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.994542667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8fffa63c-cd69-4240-8b6d-483410b09e7f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:57 ha-199780 crio[667]: time="2024-10-09 19:17:57.994809823Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8fffa63c-cd69-4240-8b6d-483410b09e7f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.038662241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbde9cbc-432a-4de4-9a34-4009cdef8693 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.038735594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbde9cbc-432a-4de4-9a34-4009cdef8693 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.040156017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3176183c-f496-46f3-81b1-c4deed2a57f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.041586065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501478041550521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3176183c-f496-46f3-81b1-c4deed2a57f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.044190346Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d99f8145-0e3d-422c-a539-cff53afe9070 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.044292137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d99f8145-0e3d-422c-a539-cff53afe9070 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.044818581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d99f8145-0e3d-422c-a539-cff53afe9070 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.086965704Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8bc84b9c-7223-42f3-a659-ee22e2a46261 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.087066618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8bc84b9c-7223-42f3-a659-ee22e2a46261 name=/runtime.v1.RuntimeService/Version
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.088025479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a61d5de-0033-4fb5-a118-37d5ee3c1b12 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.088586762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501478088561522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a61d5de-0033-4fb5-a118-37d5ee3c1b12 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.089045146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2dae399c-b7d6-4d0a-9f93-21927336b3f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.089095473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2dae399c-b7d6-4d0a-9f93-21927336b3f0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 19:17:58 ha-199780 crio[667]: time="2024-10-09 19:17:58.089328576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0ea2f43f1a79f6684a79e05e8c6c51a86af28c8988570c4056e20948be744681,PodSandboxId:4ee23da4cac603cc5f419b280a7379ed5c010b71ba0d38083b7001bc9c396dbd,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728501246045285134,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-9j59h,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b7d64762-33ba-44cd-84a4-889f31052f02,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431,PodSandboxId:085e585069bd9bc632a0c569d35a8d13233f457cc56898ea4c97a15ff5edc742,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105920098069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-r8lg7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57280df1-97c4-4a11-ab5f-71e52d6f5ebe,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72,PodSandboxId:31a68dbf07563932f008335818b3ba61926096303a0942cc5662f1e38f141c28,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728501105891861453,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-v5k75,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
9989ccc5-3cc3-4e60-a3ad-de4b5e3433c7,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec6c52f12ef1b63221a2144068919b0dbbe27e62362befd122660d3f97a92f89,PodSandboxId:fe10d9898f15c78ae24bc72987b4f6ee7e1580cff49b9ac398538e6222e34295,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1728501105797305419,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6e67924-f183-441c-b92e-1124d3582e0f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff,PodSandboxId:574f1065ffc9247507da9ed8c4c895298c9ffaab619a63885a0d0ffbb19e68cf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728501093903848945,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-2gjpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb845072-bb38-485a-8b25-5d930021781f,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46,PodSandboxId:893da030028badc07203e27871565f6795ba6f8ddae3ea6c5a420087e2e09e0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172850108
8355919076,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-n8ffq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83deff6c-dc09-49e3-9228-3edd039efd13,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378,PodSandboxId:f43a5a99f755d2e118f9836600cd002d36271381bef62026649772c45309aa52,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17285010797
46588410,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16307ec1db5d0a8b66fcf930dbe494c6,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d,PodSandboxId:1c04b2a2ff60ee9017cac4818fb0941595469194db11ef4727f00211d3303bef,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728501076697644563,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29985e334d0764bfc8b43247626b302f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf,PodSandboxId:7304e21bfd53890ae0fa2cf122ac1b01d60d4f18b9310193a23dbfc9f549d7f3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728501076680510939,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cd73d8e3a80578f2a827ade7a95c7a5,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef,PodSandboxId:a31ef18f5a4750a827a0ac485148c51fb76c89bf3322991d642ae9c9580929ee,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728501076621397992,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca8e402d5b8e10be28e81e4fe15655bd,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f,PodSandboxId:4e472f9c0008c8155e1fd6a169b3d6d53333440b40d8cd630078e9374bd2f172,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728501076579187885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.na
me: kube-scheduler-ha-199780,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c6f60d4ba1678b77f9b0f0e75429eb75,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2dae399c-b7d6-4d0a-9f93-21927336b3f0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ea2f43f1a79f       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4ee23da4cac60       busybox-7dff88458-9j59h
	22a50af75d092       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   085e585069bd9       coredns-7c65d6cfc9-r8lg7
	35a77197ba833       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   31a68dbf07563       coredns-7c65d6cfc9-v5k75
	ec6c52f12ef1b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   fe10d9898f15c       storage-provisioner
	aa6f941b511ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   574f1065ffc92       kindnet-2gjpk
	e72e7a03ebf12       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   893da030028ba       kube-proxy-n8ffq
	5e66ef287f9b9       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   f43a5a99f755d       kube-vip-ha-199780
	297d9ba8730bd       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   1c04b2a2ff60e       kube-apiserver-ha-199780
	88b0c31651177       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   7304e21bfd538       kube-controller-manager-ha-199780
	ce5525ec371c7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   a31ef18f5a475       etcd-ha-199780
	02b6fe12544b4       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   4e472f9c0008c       kube-scheduler-ha-199780
	
	
	==> coredns [22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431] <==
	[INFO] 10.244.2.2:60800 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001355455s
	[INFO] 10.244.2.2:51592 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001524757s
	[INFO] 10.244.0.4:56643 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000117626s
	[INFO] 10.244.0.4:59083 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001918015s
	[INFO] 10.244.1.2:50050 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00020734s
	[INFO] 10.244.1.2:42588 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154546s
	[INFO] 10.244.2.2:53843 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001710102s
	[INFO] 10.244.2.2:41845 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146416s
	[INFO] 10.244.2.2:36609 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000234089s
	[INFO] 10.244.0.4:46267 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001770158s
	[INFO] 10.244.0.4:50439 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000087554s
	[INFO] 10.244.0.4:34970 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127814s
	[INFO] 10.244.0.4:56896 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001173975s
	[INFO] 10.244.0.4:49966 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000151676s
	[INFO] 10.244.1.2:42996 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014083s
	[INFO] 10.244.1.2:44506 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088434s
	[INFO] 10.244.1.2:49086 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070298s
	[INFO] 10.244.2.2:50808 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197102s
	[INFO] 10.244.0.4:46671 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019106s
	[INFO] 10.244.0.4:55369 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070793s
	[INFO] 10.244.1.2:55579 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00053279s
	[INFO] 10.244.1.2:48281 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00017096s
	[INFO] 10.244.2.2:38416 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179419s
	[INFO] 10.244.2.2:37087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001697s
	[INFO] 10.244.0.4:45764 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105979s
	
	
	==> coredns [35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72] <==
	[INFO] 10.244.1.2:49567 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00017247s
	[INFO] 10.244.1.2:46716 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.012636722s
	[INFO] 10.244.1.2:55598 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000179363s
	[INFO] 10.244.1.2:47319 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137976s
	[INFO] 10.244.2.2:41489 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.2.2:55951 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000222614s
	[INFO] 10.244.2.2:48627 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015294s
	[INFO] 10.244.2.2:39644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0012309s
	[INFO] 10.244.2.2:40477 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089525s
	[INFO] 10.244.0.4:43949 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.4:36372 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000136676s
	[INFO] 10.244.0.4:46637 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067852s
	[INFO] 10.244.1.2:51170 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178464s
	[INFO] 10.244.2.2:34724 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178092s
	[INFO] 10.244.2.2:51704 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000113596s
	[INFO] 10.244.2.2:58856 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000114468s
	[INFO] 10.244.0.4:46411 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103548s
	[INFO] 10.244.0.4:56515 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097616s
	[INFO] 10.244.1.2:46439 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144476s
	[INFO] 10.244.1.2:55946 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000169556s
	[INFO] 10.244.2.2:59005 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000136307s
	[INFO] 10.244.2.2:36778 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000074325s
	[INFO] 10.244.0.4:35520 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000216466s
	[INFO] 10.244.0.4:37146 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000092067s
	[INFO] 10.244.0.4:38648 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00006473s
	
	
	==> describe nodes <==
	Name:               ha-199780
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T19_11_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:11:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:27 +0000   Wed, 09 Oct 2024 19:11:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    ha-199780
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8b350a04d4e4876ae4d16443fff45f4
	  System UUID:                f8b350a0-4d4e-4876-ae4d-16443fff45f4
	  Boot ID:                    933ad8fe-c793-4abe-b675-8fc9d8bb0df7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-9j59h              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 coredns-7c65d6cfc9-r8lg7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 coredns-7c65d6cfc9-v5k75             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m31s
	  kube-system                 etcd-ha-199780                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m35s
	  kube-system                 kindnet-2gjpk                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-ha-199780             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-ha-199780    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-proxy-n8ffq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-199780             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-vip-ha-199780                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m29s  kube-proxy       
	  Normal  Starting                 6m36s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m35s  kubelet          Node ha-199780 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s  kubelet          Node ha-199780 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s  kubelet          Node ha-199780 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  NodeReady                6m13s  kubelet          Node ha-199780 status is now: NodeReady
	  Normal  RegisteredNode           5m33s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	  Normal  RegisteredNode           4m16s  node-controller  Node ha-199780 event: Registered Node ha-199780 in Controller
	
	
	Name:               ha-199780-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_12_19_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:12:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:15:10 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Wed, 09 Oct 2024 19:14:20 +0000   Wed, 09 Oct 2024 19:15:52 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.83
	  Hostname:    ha-199780-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d9c79bf2f124101a095ed4ba0ce88eb
	  System UUID:                8d9c79bf-2f12-4101-a095-ed4ba0ce88eb
	  Boot ID:                    5dd46771-2617-4b89-b6af-8b5fb9f8968b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-6v84n                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-199780-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-pwr8x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m41s
	  kube-system                 kube-apiserver-ha-199780-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-ha-199780-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-proxy-zfsq8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  kube-system                 kube-scheduler-ha-199780-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-vip-ha-199780-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  Starting                 5m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m41s (x8 over 5m41s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m41s (x8 over 5m41s)  kubelet          Node ha-199780-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m41s (x7 over 5m41s)  kubelet          Node ha-199780-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-199780-m02 event: Registered Node ha-199780-m02 in Controller
	  Normal  NodeNotReady             2m6s                   node-controller  Node ha-199780-m02 status is now: NodeNotReady
	
	
	Name:               ha-199780-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_13_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:13:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:14:34 +0000   Wed, 09 Oct 2024 19:13:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    ha-199780-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 eebc1909fc264048999cb603a9af6ce3
	  System UUID:                eebc1909-fc26-4048-999c-b603a9af6ce3
	  Boot ID:                    b15e1b77-82c5-4af5-a3d4-20b2860c5033
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8946j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 etcd-ha-199780-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m23s
	  kube-system                 kindnet-b8ff2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m25s
	  kube-system                 kube-apiserver-ha-199780-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-controller-manager-ha-199780-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-cltcd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-ha-199780-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-vip-ha-199780-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m25s (x8 over 4m25s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m25s (x8 over 4m25s)  kubelet          Node ha-199780-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m25s (x7 over 4m25s)  kubelet          Node ha-199780-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m23s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-199780-m03 event: Registered Node ha-199780-m03 in Controller
	
	
	Name:               ha-199780-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-199780-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=ha-199780
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_09T19_14_39_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 19:14:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-199780-m04
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:17:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:15:10 +0000   Wed, 09 Oct 2024 19:14:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.124
	  Hostname:    ha-199780-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e482090944bd998625225909c9e80
	  System UUID:                781e4820-9094-4bd9-9862-5225909c9e80
	  Boot ID:                    12a0f26b-3a10-4a3c-a52b-9cbc57a77f21
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-24ftv       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m19s
	  kube-system                 kube-proxy-m4z2s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m19s (x2 over 3m20s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m19s (x2 over 3m20s)  kubelet          Node ha-199780-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m19s (x2 over 3m20s)  kubelet          Node ha-199780-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-199780-m04 event: Registered Node ha-199780-m04 in Controller
	  Normal  NodeReady                2m59s                  kubelet          Node ha-199780-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 9 19:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050153] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040118] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.841254] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479681] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588103] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 9 19:11] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.067225] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062889] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.160511] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.147234] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.288221] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +3.950259] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +4.382176] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.060168] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.347615] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.082493] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.436773] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.719462] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 9 19:12] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef] <==
	{"level":"warn","ts":"2024-10-09T19:17:58.213377Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.224030Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.229888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.239942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.324378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.366971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.372016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.374295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.378275Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.381193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.390547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.399367Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.406698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.410797Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.413896Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.421976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.424607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.430091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.438650Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.442728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.446179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.460893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.472977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.485163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-09T19:17:58.524049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"7df1350fafd42bce","from":"7df1350fafd42bce","remote-peer-id":"f466fee41a82c4a2","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:17:58 up 7 min,  0 users,  load average: 0.38, 0.36, 0.19
	Linux ha-199780 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff] <==
	I1009 19:17:25.108116       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:35.098534       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:35.098583       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:35.098861       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:35.098893       1 main.go:300] handling current node
	I1009 19:17:35.098905       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:35.098910       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:35.099056       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:35.099076       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106531       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:45.106579       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:45.106833       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:45.106857       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:45.106999       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:45.107020       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	I1009 19:17:45.107136       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:45.107162       1 main.go:300] handling current node
	I1009 19:17:55.106758       1 main.go:296] Handling node with IPs: map[192.168.39.114:{}]
	I1009 19:17:55.106966       1 main.go:300] handling current node
	I1009 19:17:55.107054       1 main.go:296] Handling node with IPs: map[192.168.39.83:{}]
	I1009 19:17:55.107090       1 main.go:323] Node ha-199780-m02 has CIDR [10.244.1.0/24] 
	I1009 19:17:55.107629       1 main.go:296] Handling node with IPs: map[192.168.39.84:{}]
	I1009 19:17:55.107682       1 main.go:323] Node ha-199780-m03 has CIDR [10.244.2.0/24] 
	I1009 19:17:55.108809       1 main.go:296] Handling node with IPs: map[192.168.39.124:{}]
	I1009 19:17:55.108888       1 main.go:323] Node ha-199780-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d] <==
	I1009 19:11:21.668889       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 19:11:21.770460       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 19:11:21.781866       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.114]
	I1009 19:11:21.782961       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 19:11:21.787948       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 19:11:22.068030       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 19:11:22.927751       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 19:11:22.944470       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 19:11:23.089040       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 19:11:27.267149       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1009 19:11:27.777277       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1009 19:14:07.172312       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48556: use of closed network connection
	E1009 19:14:07.353387       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48566: use of closed network connection
	E1009 19:14:07.545234       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48574: use of closed network connection
	E1009 19:14:07.734543       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48582: use of closed network connection
	E1009 19:14:07.929888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48590: use of closed network connection
	E1009 19:14:08.100628       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48610: use of closed network connection
	E1009 19:14:08.280738       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48618: use of closed network connection
	E1009 19:14:08.453709       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48636: use of closed network connection
	E1009 19:14:08.625372       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48648: use of closed network connection
	E1009 19:14:08.913070       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48688: use of closed network connection
	E1009 19:14:09.077842       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48702: use of closed network connection
	E1009 19:14:09.252280       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48730: use of closed network connection
	E1009 19:14:09.427983       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48748: use of closed network connection
	E1009 19:14:09.597172       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:48774: use of closed network connection
	
	
	==> kube-controller-manager [88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf] <==
	I1009 19:14:39.219907       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-199780-m04" podCIDRs=["10.244.3.0/24"]
	I1009 19:14:39.220731       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.221061       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.241490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.355995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:39.770947       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:40.508613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009348       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:42.009820       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-199780-m04"
	I1009 19:14:42.092487       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.021323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:43.490581       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:49.589213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213778       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:14:59.213909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:14:59.228331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:00.446970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:10.142919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m04"
	I1009 19:15:52.044073       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-199780-m04"
	I1009 19:15:52.044690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.073336       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:52.197476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.479755ms"
	I1009 19:15:52.197580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="56.944µs"
	I1009 19:15:53.092490       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	I1009 19:15:57.298894       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-199780-m02"
	
	
	==> kube-proxy [e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 19:11:28.707293       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 19:11:28.725677       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.114"]
	E1009 19:11:28.725782       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 19:11:28.757070       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 19:11:28.757115       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 19:11:28.757143       1 server_linux.go:169] "Using iptables Proxier"
	I1009 19:11:28.759907       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 19:11:28.760502       1 server.go:483] "Version info" version="v1.31.1"
	I1009 19:11:28.760531       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 19:11:28.763071       1 config.go:199] "Starting service config controller"
	I1009 19:11:28.763270       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 19:11:28.763554       1 config.go:105] "Starting endpoint slice config controller"
	I1009 19:11:28.763583       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 19:11:28.764395       1 config.go:328] "Starting node config controller"
	I1009 19:11:28.764485       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 19:11:28.864003       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 19:11:28.864032       1 shared_informer.go:320] Caches are synced for service config
	I1009 19:11:28.864635       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f] <==
	W1009 19:11:21.020523       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.020653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.034179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.034272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.151254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 19:11:21.151392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.213273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 19:11:21.213327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.215782       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 19:11:21.217186       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.224009       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 19:11:21.224287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 19:11:21.233925       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 19:11:21.234510       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 19:11:21.254121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 19:11:21.254998       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 19:11:24.360718       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 19:14:39.271772       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274796       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod d0c6f382-7a34-4281-922e-ded9d878bec1(kube-system/kube-proxy-v6wc7) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-v6wc7"
	E1009 19:14:39.274892       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v6wc7\": pod kube-proxy-v6wc7 is already assigned to node \"ha-199780-m04\"" pod="kube-system/kube-proxy-v6wc7"
	I1009 19:14:39.274974       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v6wc7" node="ha-199780-m04"
	E1009 19:14:39.274639       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	E1009 19:14:39.277781       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 67dc91f7-39c8-4a82-843c-629f28c633ce(kube-system/kindnet-24ftv) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-24ftv"
	E1009 19:14:39.277909       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-24ftv\": pod kindnet-24ftv is already assigned to node \"ha-199780-m04\"" pod="kube-system/kindnet-24ftv"
	I1009 19:14:39.278018       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-24ftv" node="ha-199780-m04"
	
	
	==> kubelet <==
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169875    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:23 ha-199780 kubelet[1323]: E1009 19:16:23.169902    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501383169575669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171614    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:33 ha-199780 kubelet[1323]: E1009 19:16:33.171869    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501393171084690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174108    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:43 ha-199780 kubelet[1323]: E1009 19:16:43.174391    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501403173783019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177556    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:16:53 ha-199780 kubelet[1323]: E1009 19:16:53.177590    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501413177111466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179697    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:03 ha-199780 kubelet[1323]: E1009 19:17:03.179743    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501423179388594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181290    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:13 ha-199780 kubelet[1323]: E1009 19:17:13.181685    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501433180839998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.046503    1323 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 19:17:23 ha-199780 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 19:17:23 ha-199780 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183478    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:23 ha-199780 kubelet[1323]: E1009 19:17:23.183519    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501443183171298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.185325    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:33 ha-199780 kubelet[1323]: E1009 19:17:33.186043    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501453184930973,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188281    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:43 ha-199780 kubelet[1323]: E1009 19:17:43.188327    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501463187979357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:53 ha-199780 kubelet[1323]: E1009 19:17:53.189953    1323 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501473189587167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:17:53 ha-199780 kubelet[1323]: E1009 19:17:53.190006    1323 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728501473189587167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.25s)
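The kubelet journal excerpted above shows two errors repeating roughly every ten seconds on ha-199780: the eviction manager cannot obtain dedicated image-filesystem stats from the CRI runtime, and the iptables canary fails because the ip6tables `nat` table is unavailable. A minimal diagnostic sketch one could run by hand against the node is below; it is not part of the test run and assumes crictl is present in the guest and that the guest kernel ships the ip6table_nat module.

	# Inspect the image filesystem stats CRI-O reports to the kubelet
	minikube ssh -p ha-199780 -- sudo crictl imagefsinfo
	# Check whether the legacy ip6tables nat table is backed by a loaded kernel module
	minikube ssh -p ha-199780 -- "lsmod | grep ip6table_nat || echo ip6table_nat not loaded"
	# Load it if the guest kernel provides it (may not apply to the minikube ISO)
	minikube ssh -p ha-199780 -- sudo modprobe ip6table_nat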

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-199780 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-199780 -v=7 --alsologtostderr
E1009 19:19:51.612813   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:19:51.908525   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-199780 -v=7 --alsologtostderr: exit status 82 (2m1.937726126s)

                                                
                                                
-- stdout --
	* Stopping node "ha-199780-m04"  ...
	* Stopping node "ha-199780-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:17:59.548454   34391 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:17:59.548592   34391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:17:59.548602   34391 out.go:358] Setting ErrFile to fd 2...
	I1009 19:17:59.548608   34391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:17:59.548816   34391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:17:59.549058   34391 out.go:352] Setting JSON to false
	I1009 19:17:59.549157   34391 mustload.go:65] Loading cluster: ha-199780
	I1009 19:17:59.549564   34391 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:17:59.549666   34391 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:17:59.549856   34391 mustload.go:65] Loading cluster: ha-199780
	I1009 19:17:59.550009   34391 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:17:59.550070   34391 stop.go:39] StopHost: ha-199780-m04
	I1009 19:17:59.550452   34391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:17:59.550505   34391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:17:59.565194   34391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40335
	I1009 19:17:59.565672   34391 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:17:59.566238   34391 main.go:141] libmachine: Using API Version  1
	I1009 19:17:59.566263   34391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:17:59.566564   34391 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:17:59.569042   34391 out.go:177] * Stopping node "ha-199780-m04"  ...
	I1009 19:17:59.570132   34391 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 19:17:59.570155   34391 main.go:141] libmachine: (ha-199780-m04) Calling .DriverName
	I1009 19:17:59.570387   34391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 19:17:59.570412   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHHostname
	I1009 19:17:59.572938   34391 main.go:141] libmachine: (ha-199780-m04) DBG | domain ha-199780-m04 has defined MAC address 52:54:00:56:11:1f in network mk-ha-199780
	I1009 19:17:59.573353   34391 main.go:141] libmachine: (ha-199780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:11:1f", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:14:25 +0000 UTC Type:0 Mac:52:54:00:56:11:1f Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-199780-m04 Clientid:01:52:54:00:56:11:1f}
	I1009 19:17:59.573381   34391 main.go:141] libmachine: (ha-199780-m04) DBG | domain ha-199780-m04 has defined IP address 192.168.39.124 and MAC address 52:54:00:56:11:1f in network mk-ha-199780
	I1009 19:17:59.573515   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHPort
	I1009 19:17:59.573643   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHKeyPath
	I1009 19:17:59.573768   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHUsername
	I1009 19:17:59.573906   34391 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m04/id_rsa Username:docker}
	I1009 19:17:59.664312   34391 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 19:17:59.718299   34391 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 19:17:59.772572   34391 main.go:141] libmachine: Stopping "ha-199780-m04"...
	I1009 19:17:59.772611   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetState
	I1009 19:17:59.774202   34391 main.go:141] libmachine: (ha-199780-m04) Calling .Stop
	I1009 19:17:59.777445   34391 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 0/120
	I1009 19:18:01.029047   34391 main.go:141] libmachine: (ha-199780-m04) Calling .GetState
	I1009 19:18:01.030295   34391 main.go:141] libmachine: Machine "ha-199780-m04" was stopped.
	I1009 19:18:01.030313   34391 stop.go:75] duration metric: took 1.460182924s to stop
	I1009 19:18:01.030333   34391 stop.go:39] StopHost: ha-199780-m03
	I1009 19:18:01.030624   34391 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:18:01.030663   34391 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:18:01.046000   34391 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45905
	I1009 19:18:01.046372   34391 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:18:01.046879   34391 main.go:141] libmachine: Using API Version  1
	I1009 19:18:01.046899   34391 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:18:01.047279   34391 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:18:01.049208   34391 out.go:177] * Stopping node "ha-199780-m03"  ...
	I1009 19:18:01.050146   34391 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 19:18:01.050167   34391 main.go:141] libmachine: (ha-199780-m03) Calling .DriverName
	I1009 19:18:01.050344   34391 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 19:18:01.050365   34391 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHHostname
	I1009 19:18:01.053266   34391 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:18:01.053701   34391 main.go:141] libmachine: (ha-199780-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:92:44", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:12:58 +0000 UTC Type:0 Mac:52:54:00:15:92:44 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:ha-199780-m03 Clientid:01:52:54:00:15:92:44}
	I1009 19:18:01.053739   34391 main.go:141] libmachine: (ha-199780-m03) DBG | domain ha-199780-m03 has defined IP address 192.168.39.84 and MAC address 52:54:00:15:92:44 in network mk-ha-199780
	I1009 19:18:01.053862   34391 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHPort
	I1009 19:18:01.054024   34391 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHKeyPath
	I1009 19:18:01.054156   34391 main.go:141] libmachine: (ha-199780-m03) Calling .GetSSHUsername
	I1009 19:18:01.054281   34391 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m03/id_rsa Username:docker}
	I1009 19:18:01.144547   34391 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 19:18:01.198499   34391 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 19:18:01.253409   34391 main.go:141] libmachine: Stopping "ha-199780-m03"...
	I1009 19:18:01.253434   34391 main.go:141] libmachine: (ha-199780-m03) Calling .GetState
	I1009 19:18:01.254914   34391 main.go:141] libmachine: (ha-199780-m03) Calling .Stop
	I1009 19:18:01.258380   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 0/120
	I1009 19:18:02.259646   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 1/120
	I1009 19:18:03.260961   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 2/120
	I1009 19:18:04.262319   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 3/120
	I1009 19:18:05.263715   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 4/120
	I1009 19:18:06.265830   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 5/120
	I1009 19:18:07.267327   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 6/120
	I1009 19:18:08.268532   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 7/120
	I1009 19:18:09.269941   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 8/120
	I1009 19:18:10.271453   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 9/120
	I1009 19:18:11.273158   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 10/120
	I1009 19:18:12.274573   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 11/120
	I1009 19:18:13.276217   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 12/120
	I1009 19:18:14.277434   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 13/120
	I1009 19:18:15.279002   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 14/120
	I1009 19:18:16.280672   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 15/120
	I1009 19:18:17.282121   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 16/120
	I1009 19:18:18.283330   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 17/120
	I1009 19:18:19.284940   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 18/120
	I1009 19:18:20.286458   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 19/120
	I1009 19:18:21.288104   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 20/120
	I1009 19:18:22.289603   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 21/120
	I1009 19:18:23.291168   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 22/120
	I1009 19:18:24.292631   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 23/120
	I1009 19:18:25.294310   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 24/120
	I1009 19:18:26.296863   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 25/120
	I1009 19:18:27.298305   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 26/120
	I1009 19:18:28.299692   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 27/120
	I1009 19:18:29.301171   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 28/120
	I1009 19:18:30.302547   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 29/120
	I1009 19:18:31.303933   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 30/120
	I1009 19:18:32.305295   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 31/120
	I1009 19:18:33.306721   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 32/120
	I1009 19:18:34.307874   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 33/120
	I1009 19:18:35.309329   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 34/120
	I1009 19:18:36.310822   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 35/120
	I1009 19:18:37.312199   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 36/120
	I1009 19:18:38.313278   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 37/120
	I1009 19:18:39.314574   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 38/120
	I1009 19:18:40.315687   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 39/120
	I1009 19:18:41.316852   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 40/120
	I1009 19:18:42.318072   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 41/120
	I1009 19:18:43.319190   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 42/120
	I1009 19:18:44.321397   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 43/120
	I1009 19:18:45.322623   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 44/120
	I1009 19:18:46.324309   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 45/120
	I1009 19:18:47.325829   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 46/120
	I1009 19:18:48.327295   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 47/120
	I1009 19:18:49.328690   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 48/120
	I1009 19:18:50.329964   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 49/120
	I1009 19:18:51.331694   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 50/120
	I1009 19:18:52.333707   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 51/120
	I1009 19:18:53.334759   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 52/120
	I1009 19:18:54.336128   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 53/120
	I1009 19:18:55.337167   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 54/120
	I1009 19:18:56.338968   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 55/120
	I1009 19:18:57.340156   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 56/120
	I1009 19:18:58.341306   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 57/120
	I1009 19:18:59.342432   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 58/120
	I1009 19:19:00.344256   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 59/120
	I1009 19:19:01.345899   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 60/120
	I1009 19:19:02.347102   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 61/120
	I1009 19:19:03.348318   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 62/120
	I1009 19:19:04.349557   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 63/120
	I1009 19:19:05.350738   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 64/120
	I1009 19:19:06.352359   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 65/120
	I1009 19:19:07.353783   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 66/120
	I1009 19:19:08.355079   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 67/120
	I1009 19:19:09.356474   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 68/120
	I1009 19:19:10.358163   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 69/120
	I1009 19:19:11.359523   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 70/120
	I1009 19:19:12.360818   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 71/120
	I1009 19:19:13.362255   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 72/120
	I1009 19:19:14.363505   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 73/120
	I1009 19:19:15.365094   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 74/120
	I1009 19:19:16.366660   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 75/120
	I1009 19:19:17.367996   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 76/120
	I1009 19:19:18.369444   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 77/120
	I1009 19:19:19.370914   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 78/120
	I1009 19:19:20.372165   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 79/120
	I1009 19:19:21.373964   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 80/120
	I1009 19:19:22.375181   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 81/120
	I1009 19:19:23.376428   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 82/120
	I1009 19:19:24.377831   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 83/120
	I1009 19:19:25.379196   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 84/120
	I1009 19:19:26.380798   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 85/120
	I1009 19:19:27.382174   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 86/120
	I1009 19:19:28.383516   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 87/120
	I1009 19:19:29.384911   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 88/120
	I1009 19:19:30.386476   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 89/120
	I1009 19:19:31.388162   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 90/120
	I1009 19:19:32.389480   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 91/120
	I1009 19:19:33.390899   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 92/120
	I1009 19:19:34.392313   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 93/120
	I1009 19:19:35.393690   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 94/120
	I1009 19:19:36.395231   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 95/120
	I1009 19:19:37.396649   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 96/120
	I1009 19:19:38.398300   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 97/120
	I1009 19:19:39.399662   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 98/120
	I1009 19:19:40.401040   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 99/120
	I1009 19:19:41.402783   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 100/120
	I1009 19:19:42.404213   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 101/120
	I1009 19:19:43.405415   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 102/120
	I1009 19:19:44.406900   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 103/120
	I1009 19:19:45.408225   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 104/120
	I1009 19:19:46.410087   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 105/120
	I1009 19:19:47.411687   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 106/120
	I1009 19:19:48.413192   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 107/120
	I1009 19:19:49.414426   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 108/120
	I1009 19:19:50.415828   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 109/120
	I1009 19:19:51.418373   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 110/120
	I1009 19:19:52.419659   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 111/120
	I1009 19:19:53.421428   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 112/120
	I1009 19:19:54.422688   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 113/120
	I1009 19:19:55.424903   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 114/120
	I1009 19:19:56.426508   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 115/120
	I1009 19:19:57.427976   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 116/120
	I1009 19:19:58.429292   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 117/120
	I1009 19:19:59.430670   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 118/120
	I1009 19:20:00.432745   34391 main.go:141] libmachine: (ha-199780-m03) Waiting for machine to stop 119/120
	I1009 19:20:01.433334   34391 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 19:20:01.433379   34391 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1009 19:20:01.435283   34391 out.go:201] 
	W1009 19:20:01.436746   34391 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1009 19:20:01.436767   34391 out.go:270] * 
	* 
	W1009 19:20:01.438914   34391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:20:01.440241   34391 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-199780 -v=7 --alsologtostderr" : exit status 82
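The stop command exited with status 82 (GUEST_STOP_TIMEOUT): the kvm2 driver polled ha-199780-m03 for the full 120 iterations without the VM ever leaving the "Running" state. A hedged sketch of inspecting and forcing the domain down directly through libvirt follows, assuming the kvm2 driver's usual convention of naming the libvirt domain after the machine.

	# Check the domain state the driver was polling
	virsh -c qemu:///system domstate ha-199780-m03
	# Ask the guest to shut down via ACPI; fall back to a hard power-off if it does not react
	virsh -c qemu:///system shutdown ha-199780-m03
	virsh -c qemu:///system destroy ha-199780-m03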
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-199780 --wait=true -v=7 --alsologtostderr
E1009 19:20:19.318481   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-199780 --wait=true -v=7 --alsologtostderr: (4m35.506932399s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-199780
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (2.112950581s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-199780 node start m02 -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-199780 -v=7                                                           | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-199780 -v=7                                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-199780 --wait=true -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:20 UTC | 09 Oct 24 19:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-199780                                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:24 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:20:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:20:01.486023   34872 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:20:01.486117   34872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:20:01.486124   34872 out.go:358] Setting ErrFile to fd 2...
	I1009 19:20:01.486129   34872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:20:01.486334   34872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:20:01.486832   34872 out.go:352] Setting JSON to false
	I1009 19:20:01.487710   34872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3742,"bootTime":1728497859,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:20:01.487798   34872 start.go:139] virtualization: kvm guest
	I1009 19:20:01.490024   34872 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:20:01.491595   34872 notify.go:220] Checking for updates...
	I1009 19:20:01.491621   34872 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:20:01.492795   34872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:20:01.493998   34872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:20:01.495164   34872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:20:01.496347   34872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:20:01.497531   34872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:20:01.499104   34872 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:20:01.499189   34872 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:20:01.499628   34872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:20:01.499665   34872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:20:01.515577   34872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I1009 19:20:01.516073   34872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:20:01.516632   34872 main.go:141] libmachine: Using API Version  1
	I1009 19:20:01.516650   34872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:20:01.516962   34872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:20:01.517125   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.552200   34872 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 19:20:01.553450   34872 start.go:297] selected driver: kvm2
	I1009 19:20:01.553467   34872 start.go:901] validating driver "kvm2" against &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:20:01.553635   34872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:20:01.554045   34872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:20:01.554129   34872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:20:01.568657   34872 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:20:01.569279   34872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:20:01.569314   34872 cni.go:84] Creating CNI manager for ""
	I1009 19:20:01.569371   34872 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:20:01.569424   34872 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:20:01.569531   34872 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:20:01.571488   34872 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:20:01.572662   34872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:20:01.572691   34872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:20:01.572697   34872 cache.go:56] Caching tarball of preloaded images
	I1009 19:20:01.572773   34872 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:20:01.572783   34872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:20:01.572879   34872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:20:01.573053   34872 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:20:01.573087   34872 start.go:364] duration metric: took 18.672µs to acquireMachinesLock for "ha-199780"
	I1009 19:20:01.573099   34872 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:20:01.573103   34872 fix.go:54] fixHost starting: 
	I1009 19:20:01.573370   34872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:20:01.573398   34872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:20:01.587934   34872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I1009 19:20:01.588409   34872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:20:01.588961   34872 main.go:141] libmachine: Using API Version  1
	I1009 19:20:01.588991   34872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:20:01.589451   34872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:20:01.589674   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.589864   34872 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:20:01.591307   34872 fix.go:112] recreateIfNeeded on ha-199780: state=Running err=<nil>
	W1009 19:20:01.591323   34872 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:20:01.593434   34872 out.go:177] * Updating the running kvm2 "ha-199780" VM ...
	I1009 19:20:01.594530   34872 machine.go:93] provisionDockerMachine start ...
	I1009 19:20:01.594552   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.594725   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.597340   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.597782   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.597809   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.597893   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.598029   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.598179   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.598304   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.598452   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.598666   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.598678   34872 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:20:01.704530   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:20:01.704559   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.704772   34872 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:20:01.704794   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.704987   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.707879   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.708396   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.708426   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.708553   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.708724   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.708908   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.709051   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.709218   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.709406   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.709419   34872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:20:01.836697   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:20:01.836729   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.839270   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.839647   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.839668   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.839883   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.840071   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.840228   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.840381   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.840547   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.840754   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.840779   34872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:20:01.948359   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:20:01.948390   34872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:20:01.948427   34872 buildroot.go:174] setting up certificates
	I1009 19:20:01.948446   34872 provision.go:84] configureAuth start
	I1009 19:20:01.948465   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.948733   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:20:01.951415   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.951822   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.951853   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.952037   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.954141   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.954513   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.954537   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.954667   34872 provision.go:143] copyHostCerts
	I1009 19:20:01.954692   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:20:01.954740   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:20:01.954750   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:20:01.954823   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:20:01.954923   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:20:01.954953   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:20:01.954961   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:20:01.954989   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:20:01.955050   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:20:01.955093   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:20:01.955104   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:20:01.955137   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:20:01.955225   34872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	I1009 19:20:02.175616   34872 provision.go:177] copyRemoteCerts
	I1009 19:20:02.175674   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:20:02.175699   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:02.178473   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.178971   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:02.179001   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.179213   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:02.179399   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.179576   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:02.179712   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:20:02.262847   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:20:02.262911   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:20:02.292827   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:20:02.292918   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:20:02.325866   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:20:02.325943   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:20:02.360565   34872 provision.go:87] duration metric: took 412.102006ms to configureAuth
	I1009 19:20:02.360590   34872 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:20:02.360797   34872 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:20:02.360861   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:02.363580   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.363864   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:02.363889   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.364053   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:02.364261   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.364414   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.364578   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:02.364739   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:02.364932   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:02.364965   34872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:21:33.078148   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:21:33.078176   34872 machine.go:96] duration metric: took 1m31.483632414s to provisionDockerMachine
	I1009 19:21:33.078191   34872 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:21:33.078204   34872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:21:33.078229   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.078938   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:21:33.079032   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.082788   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.083260   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.083291   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.083429   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.083608   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.083755   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.083882   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.167007   34872 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:21:33.171435   34872 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:21:33.171454   34872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:21:33.171509   34872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:21:33.171598   34872 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:21:33.171608   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:21:33.171687   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:21:33.180916   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:21:33.203698   34872 start.go:296] duration metric: took 125.496294ms for postStartSetup
	I1009 19:21:33.203740   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.204009   34872 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1009 19:21:33.204037   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.206668   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.207166   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.207193   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.207323   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.207489   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.207616   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.207751   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	W1009 19:21:33.290228   34872 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1009 19:21:33.290260   34872 fix.go:56] duration metric: took 1m31.717154952s for fixHost
	I1009 19:21:33.290284   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.292808   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.293144   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.293165   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.293296   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.293464   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.293592   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.293714   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.293847   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:33.294003   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:21:33.294013   34872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:21:33.395911   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501693.361998970
	
	I1009 19:21:33.395936   34872 fix.go:216] guest clock: 1728501693.361998970
	I1009 19:21:33.395946   34872 fix.go:229] Guest: 2024-10-09 19:21:33.36199897 +0000 UTC Remote: 2024-10-09 19:21:33.290267589 +0000 UTC m=+91.840026157 (delta=71.731381ms)
	I1009 19:21:33.396000   34872 fix.go:200] guest clock delta is within tolerance: 71.731381ms
	I1009 19:21:33.396012   34872 start.go:83] releasing machines lock for "ha-199780", held for 1m31.822915264s
	I1009 19:21:33.396053   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.396308   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:21:33.399089   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.399410   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.399431   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.399607   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400128   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400302   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400413   34872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:21:33.400452   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.400497   34872 ssh_runner.go:195] Run: cat /version.json
	I1009 19:21:33.400521   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.402737   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403103   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403145   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.403161   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403320   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.403473   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.403587   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.403605   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403632   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.403752   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.403775   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.403866   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.403966   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.404070   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.480772   34872 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:33.511893   34872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:21:33.674580   34872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:21:33.680665   34872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:21:33.680725   34872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:21:33.691081   34872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:21:33.691110   34872 start.go:495] detecting cgroup driver to use...
	I1009 19:21:33.691168   34872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:21:33.709437   34872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:21:33.724564   34872 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:21:33.724630   34872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:21:33.738493   34872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:21:33.751677   34872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:21:33.918855   34872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:21:34.074135   34872 docker.go:233] disabling docker service ...
	I1009 19:21:34.074214   34872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:21:34.094540   34872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:21:34.109085   34872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:21:34.265482   34872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:21:34.418044   34872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:21:34.432873   34872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:21:34.451397   34872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:21:34.451464   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.462054   34872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:21:34.462114   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.472486   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.482977   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.493759   34872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:21:34.504847   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.515054   34872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.525321   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.536611   34872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:21:34.545934   34872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:21:34.555435   34872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:34.701817   34872 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:21:34.927116   34872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:21:34.927171   34872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:21:34.932105   34872 start.go:563] Will wait 60s for crictl version
	I1009 19:21:34.932151   34872 ssh_runner.go:195] Run: which crictl
	I1009 19:21:34.935915   34872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:21:34.977335   34872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:21:34.977408   34872 ssh_runner.go:195] Run: crio --version
	I1009 19:21:35.007603   34872 ssh_runner.go:195] Run: crio --version
	I1009 19:21:35.040086   34872 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:21:35.041599   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:21:35.043869   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:35.044158   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:35.044175   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:35.044403   34872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:21:35.049395   34872 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:21:35.049534   34872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:21:35.049583   34872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:21:35.095434   34872 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:21:35.095459   34872 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:21:35.095525   34872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:21:35.131879   34872 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:21:35.131905   34872 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:21:35.131913   34872 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:21:35.132001   34872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:21:35.132064   34872 ssh_runner.go:195] Run: crio config
	I1009 19:21:35.194659   34872 cni.go:84] Creating CNI manager for ""
	I1009 19:21:35.194681   34872 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:21:35.194700   34872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:21:35.194725   34872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:21:35.194871   34872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:21:35.194892   34872 kube-vip.go:115] generating kube-vip config ...
	I1009 19:21:35.194939   34872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:21:35.206370   34872 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:21:35.206465   34872 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:21:35.206514   34872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:21:35.216308   34872 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:21:35.216370   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:21:35.226527   34872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:21:35.244874   34872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:21:35.261505   34872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:21:35.277735   34872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:21:35.296066   34872 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:21:35.299671   34872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:35.446678   34872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:21:35.461048   34872 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:21:35.461070   34872 certs.go:194] generating shared ca certs ...
	I1009 19:21:35.461089   34872 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.461259   34872 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:21:35.461321   34872 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:21:35.461334   34872 certs.go:256] generating profile certs ...
	I1009 19:21:35.461438   34872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:21:35.461471   34872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b
	I1009 19:21:35.461492   34872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:21:35.723121   34872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b ...
	I1009 19:21:35.723155   34872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b: {Name:mkec5a15db62e4bd503add32e8b0badd37176000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.723368   34872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b ...
	I1009 19:21:35.723383   34872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b: {Name:mkd4d418f2477b1468659558e1bee00f2e470e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.723462   34872 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:21:35.723659   34872 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:21:35.723802   34872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:21:35.723818   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:21:35.723831   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:21:35.723850   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:21:35.723868   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:21:35.723883   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:21:35.723904   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:21:35.723922   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:21:35.723938   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:21:35.723986   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:21:35.724019   34872 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:21:35.724030   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:21:35.724057   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:21:35.724083   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:21:35.724106   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:21:35.724150   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:21:35.724186   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:35.724202   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:21:35.724216   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:21:35.724742   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:21:35.749535   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:21:35.772479   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:21:35.796076   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:21:35.820736   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:21:35.845591   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:21:35.870215   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:21:35.894814   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:21:35.918448   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:21:35.942420   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:21:35.966397   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:21:35.990482   34872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:21:36.009614   34872 ssh_runner.go:195] Run: openssl version
	I1009 19:21:36.015899   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:21:36.026864   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.031697   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.031758   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.037543   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:21:36.046931   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:21:36.057735   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.062346   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.062407   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.067982   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:21:36.077258   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:21:36.087832   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.092406   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.092454   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.098067   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:21:36.107008   34872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:21:36.111676   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:21:36.117246   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:21:36.122678   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:21:36.128073   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:21:36.133681   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:21:36.139041   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:21:36.144450   34872 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:36.144589   34872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:21:36.144634   34872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:21:36.183414   34872 cri.go:89] found id: "4223b2218971c6d66ef0a7dfe2c914fef7a9ddafb30414037017e92a46ccdd84"
	I1009 19:21:36.183439   34872 cri.go:89] found id: "8c9157cd4c492f349a9abd65a61f46fff16c3f1af243272be2ed00173d18f4db"
	I1009 19:21:36.183445   34872 cri.go:89] found id: "83e0cf511c7dc4662fbcf5f1480bdc6130672841db509ed662299100e83db677"
	I1009 19:21:36.183450   34872 cri.go:89] found id: "22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431"
	I1009 19:21:36.183454   34872 cri.go:89] found id: "35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72"
	I1009 19:21:36.183459   34872 cri.go:89] found id: "aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff"
	I1009 19:21:36.183463   34872 cri.go:89] found id: "e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46"
	I1009 19:21:36.183466   34872 cri.go:89] found id: "5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378"
	I1009 19:21:36.183470   34872 cri.go:89] found id: "297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d"
	I1009 19:21:36.183476   34872 cri.go:89] found id: "88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf"
	I1009 19:21:36.183478   34872 cri.go:89] found id: "ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef"
	I1009 19:21:36.183492   34872 cri.go:89] found id: "02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f"
	I1009 19:21:36.183504   34872 cri.go:89] found id: ""
	I1009 19:21:36.183545   34872 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (400.24s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (142s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 stop -v=7 --alsologtostderr
E1009 19:26:14.975148   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-199780 stop -v=7 --alsologtostderr: exit status 82 (2m0.4545216s)

                                                
                                                
-- stdout --
	* Stopping node "ha-199780-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:24:56.984336   36738 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:24:56.984448   36738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:56.984458   36738 out.go:358] Setting ErrFile to fd 2...
	I1009 19:24:56.984464   36738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:56.984627   36738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:24:56.984845   36738 out.go:352] Setting JSON to false
	I1009 19:24:56.984932   36738 mustload.go:65] Loading cluster: ha-199780
	I1009 19:24:56.985281   36738 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:24:56.985374   36738 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:24:56.985565   36738 mustload.go:65] Loading cluster: ha-199780
	I1009 19:24:56.985714   36738 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:24:56.985749   36738 stop.go:39] StopHost: ha-199780-m04
	I1009 19:24:56.986104   36738 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:24:56.986153   36738 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:24:57.001897   36738 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34435
	I1009 19:24:57.002336   36738 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:24:57.002934   36738 main.go:141] libmachine: Using API Version  1
	I1009 19:24:57.002962   36738 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:24:57.003332   36738 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:24:57.005917   36738 out.go:177] * Stopping node "ha-199780-m04"  ...
	I1009 19:24:57.007548   36738 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 19:24:57.007574   36738 main.go:141] libmachine: (ha-199780-m04) Calling .DriverName
	I1009 19:24:57.007781   36738 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 19:24:57.007811   36738 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHHostname
	I1009 19:24:57.010520   36738 main.go:141] libmachine: (ha-199780-m04) DBG | domain ha-199780-m04 has defined MAC address 52:54:00:56:11:1f in network mk-ha-199780
	I1009 19:24:57.010939   36738 main.go:141] libmachine: (ha-199780-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:56:11:1f", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:24:24 +0000 UTC Type:0 Mac:52:54:00:56:11:1f Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:ha-199780-m04 Clientid:01:52:54:00:56:11:1f}
	I1009 19:24:57.010973   36738 main.go:141] libmachine: (ha-199780-m04) DBG | domain ha-199780-m04 has defined IP address 192.168.39.124 and MAC address 52:54:00:56:11:1f in network mk-ha-199780
	I1009 19:24:57.011098   36738 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHPort
	I1009 19:24:57.011234   36738 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHKeyPath
	I1009 19:24:57.011397   36738 main.go:141] libmachine: (ha-199780-m04) Calling .GetSSHUsername
	I1009 19:24:57.011514   36738 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780-m04/id_rsa Username:docker}
	I1009 19:24:57.099072   36738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 19:24:57.152967   36738 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 19:24:57.204964   36738 main.go:141] libmachine: Stopping "ha-199780-m04"...
	I1009 19:24:57.204994   36738 main.go:141] libmachine: (ha-199780-m04) Calling .GetState
	I1009 19:24:57.206538   36738 main.go:141] libmachine: (ha-199780-m04) Calling .Stop
	I1009 19:24:57.210054   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 0/120
	I1009 19:24:58.211416   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 1/120
	I1009 19:24:59.213400   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 2/120
	I1009 19:25:00.215111   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 3/120
	I1009 19:25:01.216267   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 4/120
	I1009 19:25:02.218056   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 5/120
	I1009 19:25:03.219509   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 6/120
	I1009 19:25:04.220689   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 7/120
	I1009 19:25:05.221806   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 8/120
	I1009 19:25:06.222988   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 9/120
	I1009 19:25:07.225152   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 10/120
	I1009 19:25:08.226440   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 11/120
	I1009 19:25:09.227667   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 12/120
	I1009 19:25:10.229037   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 13/120
	I1009 19:25:11.230259   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 14/120
	I1009 19:25:12.232012   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 15/120
	I1009 19:25:13.233295   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 16/120
	I1009 19:25:14.234437   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 17/120
	I1009 19:25:15.235662   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 18/120
	I1009 19:25:16.236866   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 19/120
	I1009 19:25:17.238155   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 20/120
	I1009 19:25:18.239464   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 21/120
	I1009 19:25:19.240812   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 22/120
	I1009 19:25:20.242277   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 23/120
	I1009 19:25:21.243665   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 24/120
	I1009 19:25:22.245635   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 25/120
	I1009 19:25:23.246995   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 26/120
	I1009 19:25:24.248439   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 27/120
	I1009 19:25:25.249828   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 28/120
	I1009 19:25:26.251127   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 29/120
	I1009 19:25:27.252999   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 30/120
	I1009 19:25:28.255091   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 31/120
	I1009 19:25:29.256471   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 32/120
	I1009 19:25:30.257671   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 33/120
	I1009 19:25:31.258942   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 34/120
	I1009 19:25:32.260806   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 35/120
	I1009 19:25:33.263014   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 36/120
	I1009 19:25:34.264416   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 37/120
	I1009 19:25:35.265567   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 38/120
	I1009 19:25:36.267220   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 39/120
	I1009 19:25:37.269299   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 40/120
	I1009 19:25:38.270657   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 41/120
	I1009 19:25:39.272235   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 42/120
	I1009 19:25:40.273524   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 43/120
	I1009 19:25:41.274826   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 44/120
	I1009 19:25:42.275970   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 45/120
	I1009 19:25:43.277398   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 46/120
	I1009 19:25:44.278609   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 47/120
	I1009 19:25:45.279932   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 48/120
	I1009 19:25:46.281191   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 49/120
	I1009 19:25:47.283234   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 50/120
	I1009 19:25:48.285382   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 51/120
	I1009 19:25:49.286649   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 52/120
	I1009 19:25:50.288086   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 53/120
	I1009 19:25:51.289434   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 54/120
	I1009 19:25:52.290953   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 55/120
	I1009 19:25:53.292376   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 56/120
	I1009 19:25:54.293488   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 57/120
	I1009 19:25:55.294727   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 58/120
	I1009 19:25:56.296117   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 59/120
	I1009 19:25:57.297917   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 60/120
	I1009 19:25:58.299247   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 61/120
	I1009 19:25:59.301476   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 62/120
	I1009 19:26:00.302907   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 63/120
	I1009 19:26:01.304367   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 64/120
	I1009 19:26:02.306076   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 65/120
	I1009 19:26:03.307270   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 66/120
	I1009 19:26:04.308578   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 67/120
	I1009 19:26:05.310058   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 68/120
	I1009 19:26:06.311269   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 69/120
	I1009 19:26:07.313354   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 70/120
	I1009 19:26:08.314722   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 71/120
	I1009 19:26:09.315997   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 72/120
	I1009 19:26:10.317468   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 73/120
	I1009 19:26:11.318636   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 74/120
	I1009 19:26:12.320520   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 75/120
	I1009 19:26:13.322065   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 76/120
	I1009 19:26:14.323247   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 77/120
	I1009 19:26:15.324582   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 78/120
	I1009 19:26:16.325815   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 79/120
	I1009 19:26:17.327775   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 80/120
	I1009 19:26:18.329628   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 81/120
	I1009 19:26:19.331209   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 82/120
	I1009 19:26:20.332570   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 83/120
	I1009 19:26:21.333896   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 84/120
	I1009 19:26:22.335941   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 85/120
	I1009 19:26:23.337485   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 86/120
	I1009 19:26:24.339155   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 87/120
	I1009 19:26:25.340265   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 88/120
	I1009 19:26:26.342033   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 89/120
	I1009 19:26:27.344090   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 90/120
	I1009 19:26:28.345378   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 91/120
	I1009 19:26:29.346591   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 92/120
	I1009 19:26:30.347880   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 93/120
	I1009 19:26:31.349454   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 94/120
	I1009 19:26:32.350810   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 95/120
	I1009 19:26:33.352639   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 96/120
	I1009 19:26:34.353789   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 97/120
	I1009 19:26:35.354964   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 98/120
	I1009 19:26:36.356239   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 99/120
	I1009 19:26:37.358101   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 100/120
	I1009 19:26:38.359753   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 101/120
	I1009 19:26:39.361324   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 102/120
	I1009 19:26:40.362599   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 103/120
	I1009 19:26:41.364057   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 104/120
	I1009 19:26:42.365323   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 105/120
	I1009 19:26:43.366764   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 106/120
	I1009 19:26:44.368052   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 107/120
	I1009 19:26:45.369344   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 108/120
	I1009 19:26:46.370619   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 109/120
	I1009 19:26:47.372526   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 110/120
	I1009 19:26:48.373987   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 111/120
	I1009 19:26:49.375486   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 112/120
	I1009 19:26:50.377766   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 113/120
	I1009 19:26:51.379128   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 114/120
	I1009 19:26:52.380939   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 115/120
	I1009 19:26:53.382321   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 116/120
	I1009 19:26:54.383773   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 117/120
	I1009 19:26:55.385422   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 118/120
	I1009 19:26:56.386868   36738 main.go:141] libmachine: (ha-199780-m04) Waiting for machine to stop 119/120
	I1009 19:26:57.387420   36738 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 19:26:57.387473   36738 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1009 19:26:57.389578   36738 out.go:201] 
	W1009 19:26:57.391040   36738 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1009 19:26:57.391070   36738 out.go:270] * 
	* 
	W1009 19:26:57.393199   36738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 19:26:57.394500   36738 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-199780 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr: (19.035182523s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr": 
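Editor's note on the timeout above: the stop attempt polled the VM state 120 times at roughly one-second intervals (the "Waiting for machine to stop 0/120" through "119/120" lines), which accounts for the ~2-minute elapsed time before the command gave up with GUEST_STOP_TIMEOUT and exit status 82. The sketch below is a minimal, hypothetical Go illustration of that kind of poll-until-stopped loop; it is not minikube's actual implementation, and the function and parameter names are invented for illustration only.

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // stopWithTimeout issues a stop request, then polls the machine state once per
    // second, giving up after maxAttempts polls. With maxAttempts=120 this is the
    // roughly two-minute window seen in the log before exit status 82.
    func stopWithTimeout(stop func() error, state func() string, maxAttempts int) error {
    	if err := stop(); err != nil {
    		return err
    	}
    	for i := 0; i < maxAttempts; i++ {
    		if state() == "Stopped" {
    			return nil
    		}
    		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
    		time.Sleep(time.Second)
    	}
    	return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
    	// Simulated driver that never reaches "Stopped", reproducing the timeout
    	// path. Use 120 attempts to mirror the full two-minute behaviour above;
    	// 5 keeps the demo short.
    	err := stopWithTimeout(
    		func() error { return nil },
    		func() string { return "Running" },
    		5,
    	)
    	fmt.Println("stop err:", err)
    }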
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-199780 -n ha-199780
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 logs -n 25: (1.955587986s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m04 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp testdata/cp-test.txt                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt                       |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780 sudo cat                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780.txt                                 |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m02 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n                                                                 | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | ha-199780-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-199780 ssh -n ha-199780-m03 sudo cat                                          | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC | 09 Oct 24 19:15 UTC |
	|         | /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-199780 node stop m02 -v=7                                                     | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-199780 node start m02 -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-199780 -v=7                                                           | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-199780 -v=7                                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-199780 --wait=true -v=7                                                    | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:20 UTC | 09 Oct 24 19:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-199780                                                                | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:24 UTC |                     |
	| node    | ha-199780 node delete m03 -v=7                                                   | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:24 UTC | 09 Oct 24 19:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-199780 stop -v=7                                                              | ha-199780 | jenkins | v1.34.0 | 09 Oct 24 19:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:20:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:20:01.486023   34872 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:20:01.486117   34872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:20:01.486124   34872 out.go:358] Setting ErrFile to fd 2...
	I1009 19:20:01.486129   34872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:20:01.486334   34872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:20:01.486832   34872 out.go:352] Setting JSON to false
	I1009 19:20:01.487710   34872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3742,"bootTime":1728497859,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:20:01.487798   34872 start.go:139] virtualization: kvm guest
	I1009 19:20:01.490024   34872 out.go:177] * [ha-199780] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:20:01.491595   34872 notify.go:220] Checking for updates...
	I1009 19:20:01.491621   34872 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:20:01.492795   34872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:20:01.493998   34872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:20:01.495164   34872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:20:01.496347   34872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:20:01.497531   34872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:20:01.499104   34872 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:20:01.499189   34872 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:20:01.499628   34872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:20:01.499665   34872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:20:01.515577   34872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44711
	I1009 19:20:01.516073   34872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:20:01.516632   34872 main.go:141] libmachine: Using API Version  1
	I1009 19:20:01.516650   34872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:20:01.516962   34872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:20:01.517125   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.552200   34872 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 19:20:01.553450   34872 start.go:297] selected driver: kvm2
	I1009 19:20:01.553467   34872 start.go:901] validating driver "kvm2" against &{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:20:01.553635   34872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:20:01.554045   34872 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:20:01.554129   34872 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:20:01.568657   34872 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:20:01.569279   34872 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:20:01.569314   34872 cni.go:84] Creating CNI manager for ""
	I1009 19:20:01.569371   34872 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:20:01.569424   34872 start.go:340] cluster config:
	{Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:20:01.569531   34872 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:20:01.571488   34872 out.go:177] * Starting "ha-199780" primary control-plane node in "ha-199780" cluster
	I1009 19:20:01.572662   34872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:20:01.572691   34872 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:20:01.572697   34872 cache.go:56] Caching tarball of preloaded images
	I1009 19:20:01.572773   34872 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:20:01.572783   34872 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:20:01.572879   34872 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/config.json ...
	I1009 19:20:01.573053   34872 start.go:360] acquireMachinesLock for ha-199780: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:20:01.573087   34872 start.go:364] duration metric: took 18.672µs to acquireMachinesLock for "ha-199780"
	I1009 19:20:01.573099   34872 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:20:01.573103   34872 fix.go:54] fixHost starting: 
	I1009 19:20:01.573370   34872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:20:01.573398   34872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:20:01.587934   34872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43487
	I1009 19:20:01.588409   34872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:20:01.588961   34872 main.go:141] libmachine: Using API Version  1
	I1009 19:20:01.588991   34872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:20:01.589451   34872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:20:01.589674   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.589864   34872 main.go:141] libmachine: (ha-199780) Calling .GetState
	I1009 19:20:01.591307   34872 fix.go:112] recreateIfNeeded on ha-199780: state=Running err=<nil>
	W1009 19:20:01.591323   34872 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:20:01.593434   34872 out.go:177] * Updating the running kvm2 "ha-199780" VM ...
	I1009 19:20:01.594530   34872 machine.go:93] provisionDockerMachine start ...
	I1009 19:20:01.594552   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:20:01.594725   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.597340   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.597782   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.597809   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.597893   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.598029   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.598179   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.598304   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.598452   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.598666   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.598678   34872 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:20:01.704530   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:20:01.704559   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.704772   34872 buildroot.go:166] provisioning hostname "ha-199780"
	I1009 19:20:01.704794   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.704987   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.707879   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.708396   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.708426   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.708553   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.708724   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.708908   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.709051   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.709218   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.709406   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.709419   34872 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-199780 && echo "ha-199780" | sudo tee /etc/hostname
	I1009 19:20:01.836697   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-199780
	
	I1009 19:20:01.836729   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.839270   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.839647   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.839668   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.839883   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:01.840071   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.840228   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:01.840381   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:01.840547   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:01.840754   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:01.840779   34872 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-199780' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-199780/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-199780' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:20:01.948359   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:20:01.948390   34872 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:20:01.948427   34872 buildroot.go:174] setting up certificates
	I1009 19:20:01.948446   34872 provision.go:84] configureAuth start
	I1009 19:20:01.948465   34872 main.go:141] libmachine: (ha-199780) Calling .GetMachineName
	I1009 19:20:01.948733   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:20:01.951415   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.951822   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.951853   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.952037   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:01.954141   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.954513   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:01.954537   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:01.954667   34872 provision.go:143] copyHostCerts
	I1009 19:20:01.954692   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:20:01.954740   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:20:01.954750   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:20:01.954823   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:20:01.954923   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:20:01.954953   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:20:01.954961   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:20:01.954989   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:20:01.955050   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:20:01.955093   34872 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:20:01.955104   34872 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:20:01.955137   34872 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:20:01.955225   34872 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.ha-199780 san=[127.0.0.1 192.168.39.114 ha-199780 localhost minikube]
	I1009 19:20:02.175616   34872 provision.go:177] copyRemoteCerts
	I1009 19:20:02.175674   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:20:02.175699   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:02.178473   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.178971   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:02.179001   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.179213   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:02.179399   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.179576   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:02.179712   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:20:02.262847   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:20:02.262911   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:20:02.292827   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:20:02.292918   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1009 19:20:02.325866   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:20:02.325943   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 19:20:02.360565   34872 provision.go:87] duration metric: took 412.102006ms to configureAuth
	I1009 19:20:02.360590   34872 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:20:02.360797   34872 config.go:182] Loaded profile config "ha-199780": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:20:02.360861   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:20:02.363580   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.363864   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:20:02.363889   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:20:02.364053   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:20:02.364261   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.364414   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:20:02.364578   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:20:02.364739   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:20:02.364932   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:20:02.364965   34872 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:21:33.078148   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:21:33.078176   34872 machine.go:96] duration metric: took 1m31.483632414s to provisionDockerMachine
	I1009 19:21:33.078191   34872 start.go:293] postStartSetup for "ha-199780" (driver="kvm2")
	I1009 19:21:33.078204   34872 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:21:33.078229   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.078938   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:21:33.079032   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.082788   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.083260   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.083291   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.083429   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.083608   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.083755   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.083882   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.167007   34872 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:21:33.171435   34872 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:21:33.171454   34872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:21:33.171509   34872 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:21:33.171598   34872 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:21:33.171608   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:21:33.171687   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:21:33.180916   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:21:33.203698   34872 start.go:296] duration metric: took 125.496294ms for postStartSetup
	I1009 19:21:33.203740   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.204009   34872 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1009 19:21:33.204037   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.206668   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.207166   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.207193   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.207323   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.207489   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.207616   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.207751   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	W1009 19:21:33.290228   34872 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1009 19:21:33.290260   34872 fix.go:56] duration metric: took 1m31.717154952s for fixHost
	I1009 19:21:33.290284   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.292808   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.293144   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.293165   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.293296   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.293464   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.293592   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.293714   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.293847   34872 main.go:141] libmachine: Using SSH client type: native
	I1009 19:21:33.294003   34872 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.114 22 <nil> <nil>}
	I1009 19:21:33.294013   34872 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:21:33.395911   34872 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728501693.361998970
	
	I1009 19:21:33.395936   34872 fix.go:216] guest clock: 1728501693.361998970
	I1009 19:21:33.395946   34872 fix.go:229] Guest: 2024-10-09 19:21:33.36199897 +0000 UTC Remote: 2024-10-09 19:21:33.290267589 +0000 UTC m=+91.840026157 (delta=71.731381ms)
	I1009 19:21:33.396000   34872 fix.go:200] guest clock delta is within tolerance: 71.731381ms
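Note: the guest-clock check above runs `date +%s.%N` over SSH and compares the parsed result with the local clock to get the skew (~71.7ms here, treated as within tolerance). Below is a minimal, self-contained Go sketch of that comparison; the helper name parseGuestClock and the standalone program are illustrative assumptions, not minikube source, and minikube's actual tolerance threshold is not shown in this log.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` (e.g. "1728501693.361998970",
// the value logged above) into a time.Time. It assumes a 9-digit nanosecond field.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728501693.361998970") // value taken from the log line above
	if err != nil {
		panic(err)
	}
	// Difference between the local clock and the guest clock; the log above
	// reports ~71.7ms and accepts it as within tolerance.
	fmt.Printf("guest clock delta: %v\n", time.Since(guest))
}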
	I1009 19:21:33.396012   34872 start.go:83] releasing machines lock for "ha-199780", held for 1m31.822915264s
	I1009 19:21:33.396053   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.396308   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:21:33.399089   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.399410   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.399431   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.399607   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400128   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400302   34872 main.go:141] libmachine: (ha-199780) Calling .DriverName
	I1009 19:21:33.400413   34872 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:21:33.400452   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.400497   34872 ssh_runner.go:195] Run: cat /version.json
	I1009 19:21:33.400521   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHHostname
	I1009 19:21:33.402737   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403103   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403145   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.403161   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403320   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.403473   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.403587   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:33.403605   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:33.403632   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.403752   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHPort
	I1009 19:21:33.403775   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.403866   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHKeyPath
	I1009 19:21:33.403966   34872 main.go:141] libmachine: (ha-199780) Calling .GetSSHUsername
	I1009 19:21:33.404070   34872 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/ha-199780/id_rsa Username:docker}
	I1009 19:21:33.480772   34872 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:33.511893   34872 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:21:33.674580   34872 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 19:21:33.680665   34872 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:21:33.680725   34872 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:21:33.691081   34872 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:21:33.691110   34872 start.go:495] detecting cgroup driver to use...
	I1009 19:21:33.691168   34872 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:21:33.709437   34872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:21:33.724564   34872 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:21:33.724630   34872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:21:33.738493   34872 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:21:33.751677   34872 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:21:33.918855   34872 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:21:34.074135   34872 docker.go:233] disabling docker service ...
	I1009 19:21:34.074214   34872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:21:34.094540   34872 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:21:34.109085   34872 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:21:34.265482   34872 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:21:34.418044   34872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:21:34.432873   34872 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:21:34.451397   34872 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:21:34.451464   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.462054   34872 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:21:34.462114   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.472486   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.482977   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.493759   34872 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:21:34.504847   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.515054   34872 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.525321   34872 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:21:34.536611   34872 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:21:34.545934   34872 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:21:34.555435   34872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:34.701817   34872 ssh_runner.go:195] Run: sudo systemctl restart crio
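Note: the `sed -i` calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. The following is a minimal Go sketch of the same kind of in-place line rewrite, shown for the pause_image setting only; the function name setPauseImage is a hypothetical stand-in, and minikube itself shells out to sed exactly as logged.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage replaces any existing pause_image line in the given CRI-O
// drop-in with the desired image, mirroring the sed expression logged above:
//   s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|
func setPauseImage(confPath, image string) error {
	data, err := os.ReadFile(confPath)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
	return os.WriteFile(confPath, out, 0o644)
}

func main() {
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}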
	I1009 19:21:34.927116   34872 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:21:34.927171   34872 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:21:34.932105   34872 start.go:563] Will wait 60s for crictl version
	I1009 19:21:34.932151   34872 ssh_runner.go:195] Run: which crictl
	I1009 19:21:34.935915   34872 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:21:34.977335   34872 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:21:34.977408   34872 ssh_runner.go:195] Run: crio --version
	I1009 19:21:35.007603   34872 ssh_runner.go:195] Run: crio --version
	I1009 19:21:35.040086   34872 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:21:35.041599   34872 main.go:141] libmachine: (ha-199780) Calling .GetIP
	I1009 19:21:35.043869   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:35.044158   34872 main.go:141] libmachine: (ha-199780) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:16:82", ip: ""} in network mk-ha-199780: {Iface:virbr1 ExpiryTime:2024-10-09 20:10:57 +0000 UTC Type:0 Mac:52:54:00:5a:16:82 Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:ha-199780 Clientid:01:52:54:00:5a:16:82}
	I1009 19:21:35.044175   34872 main.go:141] libmachine: (ha-199780) DBG | domain ha-199780 has defined IP address 192.168.39.114 and MAC address 52:54:00:5a:16:82 in network mk-ha-199780
	I1009 19:21:35.044403   34872 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:21:35.049395   34872 kubeadm.go:883] updating cluster {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:21:35.049534   34872 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:21:35.049583   34872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:21:35.095434   34872 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:21:35.095459   34872 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:21:35.095525   34872 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:21:35.131879   34872 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:21:35.131905   34872 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:21:35.131913   34872 kubeadm.go:934] updating node { 192.168.39.114 8443 v1.31.1 crio true true} ...
	I1009 19:21:35.132001   34872 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-199780 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.114
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:21:35.132064   34872 ssh_runner.go:195] Run: crio config
	I1009 19:21:35.194659   34872 cni.go:84] Creating CNI manager for ""
	I1009 19:21:35.194681   34872 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1009 19:21:35.194700   34872 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:21:35.194725   34872 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.114 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-199780 NodeName:ha-199780 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.114"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.114 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:21:35.194871   34872 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.114
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-199780"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.114
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.114"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:21:35.194892   34872 kube-vip.go:115] generating kube-vip config ...
	I1009 19:21:35.194939   34872 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1009 19:21:35.206370   34872 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1009 19:21:35.206465   34872 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1009 19:21:35.206514   34872 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:21:35.216308   34872 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:21:35.216370   34872 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1009 19:21:35.226527   34872 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1009 19:21:35.244874   34872 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:21:35.261505   34872 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I1009 19:21:35.277735   34872 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1009 19:21:35.296066   34872 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1009 19:21:35.299671   34872 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:21:35.446678   34872 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:21:35.461048   34872 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780 for IP: 192.168.39.114
	I1009 19:21:35.461070   34872 certs.go:194] generating shared ca certs ...
	I1009 19:21:35.461089   34872 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.461259   34872 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:21:35.461321   34872 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:21:35.461334   34872 certs.go:256] generating profile certs ...
	I1009 19:21:35.461438   34872 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/client.key
	I1009 19:21:35.461471   34872 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b
	I1009 19:21:35.461492   34872 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.114 192.168.39.83 192.168.39.84 192.168.39.254]
	I1009 19:21:35.723121   34872 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b ...
	I1009 19:21:35.723155   34872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b: {Name:mkec5a15db62e4bd503add32e8b0badd37176000 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.723368   34872 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b ...
	I1009 19:21:35.723383   34872 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b: {Name:mkd4d418f2477b1468659558e1bee00f2e470e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:21:35.723462   34872 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt.9477596b -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt
	I1009 19:21:35.723659   34872 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key.9477596b -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key
	I1009 19:21:35.723802   34872 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key
	I1009 19:21:35.723818   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:21:35.723831   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:21:35.723850   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:21:35.723868   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:21:35.723883   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:21:35.723904   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:21:35.723922   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:21:35.723938   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:21:35.723986   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:21:35.724019   34872 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:21:35.724030   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:21:35.724057   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:21:35.724083   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:21:35.724106   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:21:35.724150   34872 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:21:35.724186   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:35.724202   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:21:35.724216   34872 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:21:35.724742   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:21:35.749535   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:21:35.772479   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:21:35.796076   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:21:35.820736   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 19:21:35.845591   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 19:21:35.870215   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:21:35.894814   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/ha-199780/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 19:21:35.918448   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:21:35.942420   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:21:35.966397   34872 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:21:35.990482   34872 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:21:36.009614   34872 ssh_runner.go:195] Run: openssl version
	I1009 19:21:36.015899   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:21:36.026864   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.031697   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.031758   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:21:36.037543   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:21:36.046931   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:21:36.057735   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.062346   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.062407   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:21:36.067982   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:21:36.077258   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:21:36.087832   34872 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.092406   34872 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.092454   34872 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:21:36.098067   34872 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:21:36.107008   34872 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:21:36.111676   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:21:36.117246   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:21:36.122678   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:21:36.128073   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:21:36.133681   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:21:36.139041   34872 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
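Note: each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the given control-plane certificate expires within the next 86400 seconds (24 hours). A self-contained Go equivalent using the standard library's crypto/x509 is sketched below; the helper name expiresWithin and the standalone program are illustrative assumptions, not minikube code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window (24h corresponds to the -checkend 86400 used above).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path taken from one of the checks logged above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}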
	I1009 19:21:36.144450   34872 kubeadm.go:392] StartCluster: {Name:ha-199780 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-199780 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.114 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.83 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.84 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.124 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:21:36.144589   34872 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:21:36.144634   34872 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:21:36.183414   34872 cri.go:89] found id: "4223b2218971c6d66ef0a7dfe2c914fef7a9ddafb30414037017e92a46ccdd84"
	I1009 19:21:36.183439   34872 cri.go:89] found id: "8c9157cd4c492f349a9abd65a61f46fff16c3f1af243272be2ed00173d18f4db"
	I1009 19:21:36.183445   34872 cri.go:89] found id: "83e0cf511c7dc4662fbcf5f1480bdc6130672841db509ed662299100e83db677"
	I1009 19:21:36.183450   34872 cri.go:89] found id: "22a50af75d0920e41f6485e69ac03141da00aa9f313bb2815346263dcbf49431"
	I1009 19:21:36.183454   34872 cri.go:89] found id: "35a77197ba8334a2dae05cd0bb3e07f535fdae063280863a8d273c625816be72"
	I1009 19:21:36.183459   34872 cri.go:89] found id: "aa6f941b511eef3a4c2c974c8cca469ef19be3ee75d1122209e507c5dd54faff"
	I1009 19:21:36.183463   34872 cri.go:89] found id: "e72e7a03ebf127b8dc1c117f57710e43c3399b8dd90a0f32f7fe8f5497194d46"
	I1009 19:21:36.183466   34872 cri.go:89] found id: "5e66ef287f9b98b041e5c20f9ed9fd0409987ed32d6ea0be27ec9c5ad0cf6378"
	I1009 19:21:36.183470   34872 cri.go:89] found id: "297d9ba8730bd2a76417f09364ef2f623cf3c31ff77de8b5872531ca51a9ab6d"
	I1009 19:21:36.183476   34872 cri.go:89] found id: "88b0c3165117790e6e77b3d7bfc4fd1582bc365ec6a4175c83ecf0572e012eaf"
	I1009 19:21:36.183478   34872 cri.go:89] found id: "ce5525ec371c774cbf47409f03bc2d10b39cd5b333faec3542ab03d7f5f876ef"
	I1009 19:21:36.183492   34872 cri.go:89] found id: "02b6fe12544b4e250af332efa0a6643a8885787e09b3370747b18c57e1d5fb2f"
	I1009 19:21:36.183504   34872 cri.go:89] found id: ""
	I1009 19:21:36.183545   34872 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-199780 -n ha-199780
helpers_test.go:261: (dbg) Run:  kubectl --context ha-199780 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-707643
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-707643
E1009 19:42:54.979299   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-707643: exit status 82 (2m1.840682248s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-707643-m03"  ...
	* Stopping node "multinode-707643-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-707643" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-707643 --wait=true -v=8 --alsologtostderr
E1009 19:44:51.614376   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:44:51.909174   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:47:54.682076   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-707643 --wait=true -v=8 --alsologtostderr: (3m22.697115758s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-707643
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-707643 -n multinode-707643
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 logs -n 25: (2.062894885s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | multinode-707643:/home/docker/cp-test_multinode-707643-m02_multinode-707643.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643 sudo cat                                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m02_multinode-707643.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03:/home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643-m03 sudo cat                                   | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp testdata/cp-test.txt                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643:/home/docker/cp-test_multinode-707643-m03_multinode-707643.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643 sudo cat                                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m03_multinode-707643.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02:/home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643-m02 sudo cat                                   | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-707643 node stop m03                                                          | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	| node    | multinode-707643 node start                                                             | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-707643                                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC |                     |
	| stop    | -p multinode-707643                                                                     | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC |                     |
	| start   | -p multinode-707643                                                                     | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:44 UTC | 09 Oct 24 19:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-707643                                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
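	The audit trail above records the sequence this test exercises: the cluster is stopped at 19:42 (that stop entry shows no end time), restarted with --wait=true at 19:44, and a node list is requested again at 19:48 once the restart finishes. Replaying that sequence by hand against the same profile would look roughly like this (same binary and flags as in the table):
	
	    out/minikube-linux-amd64 stop -p multinode-707643
	    out/minikube-linux-amd64 start -p multinode-707643 --wait=true -v=8 --alsologtostderr
	    out/minikube-linux-amd64 node list -p multinode-707643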
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:44:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:47.927416   46924 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:44:47.927551   46924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:47.927561   46924 out.go:358] Setting ErrFile to fd 2...
	I1009 19:44:47.927567   46924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:47.927727   46924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:44:47.928249   46924 out.go:352] Setting JSON to false
	I1009 19:44:47.929078   46924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5229,"bootTime":1728497859,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:47.929166   46924 start.go:139] virtualization: kvm guest
	I1009 19:44:47.931571   46924 out.go:177] * [multinode-707643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:44:47.932944   46924 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:44:47.932947   46924 notify.go:220] Checking for updates...
	I1009 19:44:47.935531   46924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:47.936771   46924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:44:47.938037   46924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:44:47.939072   46924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:47.940170   46924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:47.941554   46924 config.go:182] Loaded profile config "multinode-707643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:44:47.941646   46924 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:44:47.942059   46924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:47.942106   46924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:47.956793   46924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I1009 19:44:47.957165   46924 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:47.957653   46924 main.go:141] libmachine: Using API Version  1
	I1009 19:44:47.957673   46924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:47.958029   46924 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:47.958241   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:47.993605   46924 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 19:44:47.994761   46924 start.go:297] selected driver: kvm2
	I1009 19:44:47.994772   46924 start.go:901] validating driver "kvm2" against &{Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:47.994899   46924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:47.995213   46924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:47.995278   46924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:44:48.009668   46924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:44:48.010381   46924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:48.010422   46924 cni.go:84] Creating CNI manager for ""
	I1009 19:44:48.010489   46924 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1009 19:44:48.010570   46924 start.go:340] cluster config:
	{Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:48.010711   46924 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:48.012554   46924 out.go:177] * Starting "multinode-707643" primary control-plane node in "multinode-707643" cluster
	I1009 19:44:48.013726   46924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:44:48.013760   46924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:48.013768   46924 cache.go:56] Caching tarball of preloaded images
	I1009 19:44:48.013842   46924 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:48.013853   46924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:44:48.013946   46924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/config.json ...
	I1009 19:44:48.014123   46924 start.go:360] acquireMachinesLock for multinode-707643: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:44:48.014173   46924 start.go:364] duration metric: took 35.182µs to acquireMachinesLock for "multinode-707643"
	I1009 19:44:48.014186   46924 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:44:48.014192   46924 fix.go:54] fixHost starting: 
	I1009 19:44:48.014427   46924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:48.014456   46924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:48.027924   46924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1009 19:44:48.028369   46924 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:48.028858   46924 main.go:141] libmachine: Using API Version  1
	I1009 19:44:48.028875   46924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:48.029208   46924 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:48.029436   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:48.029649   46924 main.go:141] libmachine: (multinode-707643) Calling .GetState
	I1009 19:44:48.031175   46924 fix.go:112] recreateIfNeeded on multinode-707643: state=Running err=<nil>
	W1009 19:44:48.031221   46924 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:44:48.033074   46924 out.go:177] * Updating the running kvm2 "multinode-707643" VM ...
	I1009 19:44:48.034292   46924 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:48.034311   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:48.034481   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.036877   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.037299   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.037324   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.037452   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.037634   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.037760   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.037906   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.038047   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.038264   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.038282   46924 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:48.144056   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-707643
	
	I1009 19:44:48.144079   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.144288   46924 buildroot.go:166] provisioning hostname "multinode-707643"
	I1009 19:44:48.144307   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.144495   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.146973   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.147401   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.147425   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.147567   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.147715   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.147867   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.147960   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.148107   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.148280   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.148297   46924 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-707643 && echo "multinode-707643" | sudo tee /etc/hostname
	I1009 19:44:48.270817   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-707643
	
	I1009 19:44:48.270847   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.273513   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.273889   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.273918   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.274041   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.274234   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.274394   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.274525   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.274692   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.274957   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.274984   46924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-707643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-707643/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-707643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:48.380523   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:48.380554   46924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:44:48.380576   46924 buildroot.go:174] setting up certificates
	I1009 19:44:48.380591   46924 provision.go:84] configureAuth start
	I1009 19:44:48.380605   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.380853   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:44:48.383484   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.383808   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.383848   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.384009   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.386047   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.386379   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.386413   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.386536   46924 provision.go:143] copyHostCerts
	I1009 19:44:48.386564   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:44:48.386605   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:44:48.386613   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:44:48.386676   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:44:48.386771   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:44:48.386789   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:44:48.386795   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:44:48.386820   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:44:48.386874   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:44:48.386892   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:44:48.386903   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:44:48.386927   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:44:48.386972   46924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.multinode-707643 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-707643]
	I1009 19:44:48.527341   46924 provision.go:177] copyRemoteCerts
	I1009 19:44:48.527401   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:48.527427   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.530247   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.530577   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.530601   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.530816   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.530981   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.531137   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.531260   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:44:48.614173   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:48.614251   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:48.639653   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:48.639725   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:48.664955   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:48.665015   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1009 19:44:48.689412   46924 provision.go:87] duration metric: took 308.809441ms to configureAuth
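	configureAuth regenerates the machine server certificate with the SANs listed above and copies the server key, CA, and server cert into /etc/docker on the node (the three scp steps just before this). If certificate trouble were suspected, a quick check of what actually landed in the guest, using the same ssh pattern as in the audit table and assuming openssl is available in the guest image, might be:
	
	    out/minikube-linux-amd64 -p multinode-707643 ssh sudo ls -l /etc/docker
	    out/minikube-linux-amd64 -p multinode-707643 ssh sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates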
	I1009 19:44:48.689438   46924 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:48.689712   46924 config.go:182] Loaded profile config "multinode-707643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:44:48.689799   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.692823   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.693139   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.693165   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.693342   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.693504   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.693638   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.693769   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.693966   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.694119   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.694136   46924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:46:19.527450   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:46:19.527478   46924 machine.go:96] duration metric: took 1m31.493172909s to provisionDockerMachine
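	Worth noting: the SSH command that writes /etc/sysconfig/crio.minikube and then runs sudo systemctl restart crio was issued at 19:44:48 and only returned at 19:46:19, so that single restart accounts for essentially all of the 1m31.49s provisionDockerMachine step. One way to look into a slow crio restart on the node, assuming the VM is still reachable (same ssh invocation style as above), is:
	
	    out/minikube-linux-amd64 -p multinode-707643 ssh sudo systemctl status crio --no-pager
	    out/minikube-linux-amd64 -p multinode-707643 ssh sudo journalctl -u crio --no-pager -n 200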
	I1009 19:46:19.527492   46924 start.go:293] postStartSetup for "multinode-707643" (driver="kvm2")
	I1009 19:46:19.527507   46924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:46:19.527542   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.527821   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:46:19.527851   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.530839   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.531199   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.531224   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.531335   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.531474   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.531580   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.531698   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.614558   46924 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:46:19.618701   46924 command_runner.go:130] > NAME=Buildroot
	I1009 19:46:19.618719   46924 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1009 19:46:19.618725   46924 command_runner.go:130] > ID=buildroot
	I1009 19:46:19.618732   46924 command_runner.go:130] > VERSION_ID=2023.02.9
	I1009 19:46:19.618741   46924 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1009 19:46:19.618785   46924 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:46:19.618806   46924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:46:19.618866   46924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:46:19.618932   46924 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:46:19.618943   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:46:19.619051   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:46:19.628332   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:46:19.651875   46924 start.go:296] duration metric: took 124.371408ms for postStartSetup
	I1009 19:46:19.651909   46924 fix.go:56] duration metric: took 1m31.637715054s for fixHost
	I1009 19:46:19.651931   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.654439   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.654795   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.654818   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.654996   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.655173   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.655274   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.655353   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.655490   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:46:19.655657   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:46:19.655668   46924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:46:19.759660   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728503179.735466149
	
	I1009 19:46:19.759680   46924 fix.go:216] guest clock: 1728503179.735466149
	I1009 19:46:19.759687   46924 fix.go:229] Guest: 2024-10-09 19:46:19.735466149 +0000 UTC Remote: 2024-10-09 19:46:19.651914828 +0000 UTC m=+91.759678640 (delta=83.551321ms)
	I1009 19:46:19.759704   46924 fix.go:200] guest clock delta is within tolerance: 83.551321ms
	I1009 19:46:19.759708   46924 start.go:83] releasing machines lock for "multinode-707643", held for 1m31.745527134s
	I1009 19:46:19.759725   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.759930   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:46:19.762045   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.762385   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.762411   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.762518   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.762962   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.763124   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.763272   46924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:46:19.763316   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.763375   46924 ssh_runner.go:195] Run: cat /version.json
	I1009 19:46:19.763401   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.765873   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766043   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766253   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.766276   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766424   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.766490   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.766519   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766573   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.766687   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.766709   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.766835   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.766870   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.766999   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.767148   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.852744   46924 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1009 19:46:19.852860   46924 ssh_runner.go:195] Run: systemctl --version
	I1009 19:46:19.876154   46924 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:46:19.876816   46924 command_runner.go:130] > systemd 252 (252)
	I1009 19:46:19.876848   46924 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1009 19:46:19.876903   46924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:46:20.050072   46924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:46:20.061630   46924 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:46:20.061853   46924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:46:20.061930   46924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:46:20.071814   46924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:46:20.071832   46924 start.go:495] detecting cgroup driver to use...
	I1009 19:46:20.071881   46924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:46:20.089797   46924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:46:20.104434   46924 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:46:20.104517   46924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:46:20.119050   46924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:46:20.133133   46924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:46:20.285437   46924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:46:20.427386   46924 docker.go:233] disabling docker service ...
	I1009 19:46:20.427475   46924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:46:20.443087   46924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:46:20.456166   46924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:46:20.589679   46924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:46:20.730762   46924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:46:20.744643   46924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:46:20.763752   46924 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:46:20.763791   46924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:46:20.763840   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.774225   46924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:46:20.774272   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.784385   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.794215   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.804014   46924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:46:20.814229   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.824143   46924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.834740   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.844320   46924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:46:20.852963   46924 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:46:20.853005   46924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:46:20.861573   46924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:46:21.000733   46924 ssh_runner.go:195] Run: sudo systemctl restart crio
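	The sed edits above rewrite the CRI-O drop-in that this restart picks up. Based on those edits alone, the touched keys in /etc/crio/crio.conf.d/02-crio.conf should end up looking roughly like the following (a sketch of just the modified settings, not the whole file):
	
	    pause_image = "registry.k8s.io/pause:3.10"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]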
	I1009 19:46:21.196252   46924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:46:21.196342   46924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:46:21.201259   46924 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:46:21.201283   46924 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:46:21.201293   46924 command_runner.go:130] > Device: 0,22	Inode: 1291        Links: 1
	I1009 19:46:21.201302   46924 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:46:21.201316   46924 command_runner.go:130] > Access: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201328   46924 command_runner.go:130] > Modify: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201339   46924 command_runner.go:130] > Change: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201348   46924 command_runner.go:130] >  Birth: -
	I1009 19:46:21.201368   46924 start.go:563] Will wait 60s for crictl version
	I1009 19:46:21.201414   46924 ssh_runner.go:195] Run: which crictl
	I1009 19:46:21.204895   46924 command_runner.go:130] > /usr/bin/crictl
	I1009 19:46:21.205033   46924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:46:21.246638   46924 command_runner.go:130] > Version:  0.1.0
	I1009 19:46:21.246662   46924 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:46:21.246669   46924 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1009 19:46:21.246676   46924 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:46:21.246692   46924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:46:21.246750   46924 ssh_runner.go:195] Run: crio --version
	I1009 19:46:21.274766   46924 command_runner.go:130] > crio version 1.29.1
	I1009 19:46:21.274793   46924 command_runner.go:130] > Version:        1.29.1
	I1009 19:46:21.274799   46924 command_runner.go:130] > GitCommit:      unknown
	I1009 19:46:21.274803   46924 command_runner.go:130] > GitCommitDate:  unknown
	I1009 19:46:21.274807   46924 command_runner.go:130] > GitTreeState:   clean
	I1009 19:46:21.274812   46924 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1009 19:46:21.274817   46924 command_runner.go:130] > GoVersion:      go1.21.6
	I1009 19:46:21.274821   46924 command_runner.go:130] > Compiler:       gc
	I1009 19:46:21.274825   46924 command_runner.go:130] > Platform:       linux/amd64
	I1009 19:46:21.274829   46924 command_runner.go:130] > Linkmode:       dynamic
	I1009 19:46:21.274850   46924 command_runner.go:130] > BuildTags:      
	I1009 19:46:21.274854   46924 command_runner.go:130] >   containers_image_ostree_stub
	I1009 19:46:21.274858   46924 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1009 19:46:21.274863   46924 command_runner.go:130] >   btrfs_noversion
	I1009 19:46:21.274866   46924 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1009 19:46:21.274874   46924 command_runner.go:130] >   libdm_no_deferred_remove
	I1009 19:46:21.274877   46924 command_runner.go:130] >   seccomp
	I1009 19:46:21.274881   46924 command_runner.go:130] > LDFlags:          unknown
	I1009 19:46:21.274889   46924 command_runner.go:130] > SeccompEnabled:   true
	I1009 19:46:21.274898   46924 command_runner.go:130] > AppArmorEnabled:  false
	I1009 19:46:21.276026   46924 ssh_runner.go:195] Run: crio --version
	I1009 19:46:21.303825   46924 command_runner.go:130] > crio version 1.29.1
	I1009 19:46:21.303854   46924 command_runner.go:130] > Version:        1.29.1
	I1009 19:46:21.303863   46924 command_runner.go:130] > GitCommit:      unknown
	I1009 19:46:21.303869   46924 command_runner.go:130] > GitCommitDate:  unknown
	I1009 19:46:21.303876   46924 command_runner.go:130] > GitTreeState:   clean
	I1009 19:46:21.303888   46924 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1009 19:46:21.303896   46924 command_runner.go:130] > GoVersion:      go1.21.6
	I1009 19:46:21.303901   46924 command_runner.go:130] > Compiler:       gc
	I1009 19:46:21.303906   46924 command_runner.go:130] > Platform:       linux/amd64
	I1009 19:46:21.303910   46924 command_runner.go:130] > Linkmode:       dynamic
	I1009 19:46:21.303915   46924 command_runner.go:130] > BuildTags:      
	I1009 19:46:21.303923   46924 command_runner.go:130] >   containers_image_ostree_stub
	I1009 19:46:21.303927   46924 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1009 19:46:21.303936   46924 command_runner.go:130] >   btrfs_noversion
	I1009 19:46:21.303944   46924 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1009 19:46:21.303947   46924 command_runner.go:130] >   libdm_no_deferred_remove
	I1009 19:46:21.303951   46924 command_runner.go:130] >   seccomp
	I1009 19:46:21.303955   46924 command_runner.go:130] > LDFlags:          unknown
	I1009 19:46:21.303959   46924 command_runner.go:130] > SeccompEnabled:   true
	I1009 19:46:21.303963   46924 command_runner.go:130] > AppArmorEnabled:  false
	I1009 19:46:21.306103   46924 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:46:21.307363   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:46:21.310183   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:21.310542   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:21.310568   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:21.310773   46924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:46:21.314803   46924 command_runner.go:130] > 192.168.39.1	host.minikube.internal
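	The grep above simply confirms that /etc/hosts already maps host.minikube.internal to the gateway address. A small illustrative check (the expected entry is taken from the log; this is not minikube's implementation) could be:

// Sketch: look for the host.minikube.internal entry in /etc/hosts.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	const want = "192.168.39.1\thost.minikube.internal" // entry from the log above
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), "host.minikube.internal") {
			fmt.Println("entry present:", scanner.Text())
			return
		}
	}
	fmt.Println("entry missing, would append:", want)
}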
	I1009 19:46:21.315015   46924 kubeadm.go:883] updating cluster {Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:46:21.315168   46924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:46:21.315215   46924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:46:21.358094   46924 command_runner.go:130] > {
	I1009 19:46:21.358113   46924 command_runner.go:130] >   "images": [
	I1009 19:46:21.358117   46924 command_runner.go:130] >     {
	I1009 19:46:21.358131   46924 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1009 19:46:21.358136   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358141   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1009 19:46:21.358145   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358148   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358156   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1009 19:46:21.358163   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1009 19:46:21.358166   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358171   46924 command_runner.go:130] >       "size": "87190579",
	I1009 19:46:21.358174   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358178   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358182   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358186   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358190   46924 command_runner.go:130] >     },
	I1009 19:46:21.358198   46924 command_runner.go:130] >     {
	I1009 19:46:21.358203   46924 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1009 19:46:21.358214   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358219   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1009 19:46:21.358222   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358226   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358233   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1009 19:46:21.358240   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1009 19:46:21.358244   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358248   46924 command_runner.go:130] >       "size": "94965812",
	I1009 19:46:21.358254   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358262   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358276   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358280   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358283   46924 command_runner.go:130] >     },
	I1009 19:46:21.358286   46924 command_runner.go:130] >     {
	I1009 19:46:21.358291   46924 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1009 19:46:21.358296   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358303   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1009 19:46:21.358315   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358323   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358333   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1009 19:46:21.358346   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1009 19:46:21.358350   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358354   46924 command_runner.go:130] >       "size": "1363676",
	I1009 19:46:21.358358   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358362   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358366   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358370   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358375   46924 command_runner.go:130] >     },
	I1009 19:46:21.358383   46924 command_runner.go:130] >     {
	I1009 19:46:21.358391   46924 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:46:21.358400   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358408   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:46:21.358415   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358420   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358427   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:46:21.358450   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:46:21.358460   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358466   46924 command_runner.go:130] >       "size": "31470524",
	I1009 19:46:21.358475   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358481   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358490   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358496   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358504   46924 command_runner.go:130] >     },
	I1009 19:46:21.358509   46924 command_runner.go:130] >     {
	I1009 19:46:21.358520   46924 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1009 19:46:21.358528   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358536   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1009 19:46:21.358546   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358553   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358567   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1009 19:46:21.358590   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1009 19:46:21.358599   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358612   46924 command_runner.go:130] >       "size": "63273227",
	I1009 19:46:21.358621   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358627   46924 command_runner.go:130] >       "username": "nonroot",
	I1009 19:46:21.358633   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358637   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358643   46924 command_runner.go:130] >     },
	I1009 19:46:21.358646   46924 command_runner.go:130] >     {
	I1009 19:46:21.358655   46924 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1009 19:46:21.358665   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358673   46924 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1009 19:46:21.358681   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358688   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358701   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1009 19:46:21.358715   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1009 19:46:21.358723   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358729   46924 command_runner.go:130] >       "size": "149009664",
	I1009 19:46:21.358735   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.358741   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.358749   46924 command_runner.go:130] >       },
	I1009 19:46:21.358756   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358763   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358770   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358778   46924 command_runner.go:130] >     },
	I1009 19:46:21.358783   46924 command_runner.go:130] >     {
	I1009 19:46:21.358795   46924 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1009 19:46:21.358804   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358810   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1009 19:46:21.358814   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358818   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358832   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1009 19:46:21.358846   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1009 19:46:21.358860   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358869   46924 command_runner.go:130] >       "size": "95237600",
	I1009 19:46:21.358878   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.358886   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.358892   46924 command_runner.go:130] >       },
	I1009 19:46:21.358899   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358903   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358912   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358920   46924 command_runner.go:130] >     },
	I1009 19:46:21.358928   46924 command_runner.go:130] >     {
	I1009 19:46:21.358938   46924 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1009 19:46:21.358947   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358958   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1009 19:46:21.358967   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358974   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359000   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1009 19:46:21.359017   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1009 19:46:21.359022   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359028   46924 command_runner.go:130] >       "size": "89437508",
	I1009 19:46:21.359036   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359042   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.359050   46924 command_runner.go:130] >       },
	I1009 19:46:21.359057   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359075   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359082   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359088   46924 command_runner.go:130] >     },
	I1009 19:46:21.359093   46924 command_runner.go:130] >     {
	I1009 19:46:21.359103   46924 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1009 19:46:21.359108   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359116   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1009 19:46:21.359121   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359128   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359140   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1009 19:46:21.359154   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1009 19:46:21.359160   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359166   46924 command_runner.go:130] >       "size": "92733849",
	I1009 19:46:21.359172   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.359178   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359184   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359190   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359196   46924 command_runner.go:130] >     },
	I1009 19:46:21.359200   46924 command_runner.go:130] >     {
	I1009 19:46:21.359223   46924 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1009 19:46:21.359231   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359236   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1009 19:46:21.359239   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359250   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359266   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1009 19:46:21.359280   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1009 19:46:21.359288   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359296   46924 command_runner.go:130] >       "size": "68420934",
	I1009 19:46:21.359305   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359314   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.359320   46924 command_runner.go:130] >       },
	I1009 19:46:21.359325   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359333   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359340   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359348   46924 command_runner.go:130] >     },
	I1009 19:46:21.359359   46924 command_runner.go:130] >     {
	I1009 19:46:21.359370   46924 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1009 19:46:21.359379   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359388   46924 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1009 19:46:21.359396   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359402   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359411   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1009 19:46:21.359424   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1009 19:46:21.359438   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359447   46924 command_runner.go:130] >       "size": "742080",
	I1009 19:46:21.359456   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359466   46924 command_runner.go:130] >         "value": "65535"
	I1009 19:46:21.359473   46924 command_runner.go:130] >       },
	I1009 19:46:21.359479   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359485   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359492   46924 command_runner.go:130] >       "pinned": true
	I1009 19:46:21.359495   46924 command_runner.go:130] >     }
	I1009 19:46:21.359502   46924 command_runner.go:130] >   ]
	I1009 19:46:21.359507   46924 command_runner.go:130] > }
	I1009 19:46:21.359744   46924 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:46:21.359757   46924 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:46:21.359812   46924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:46:21.393381   46924 command_runner.go:130] > {
	I1009 19:46:21.393409   46924 command_runner.go:130] >   "images": [
	I1009 19:46:21.393414   46924 command_runner.go:130] >     {
	I1009 19:46:21.393423   46924 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1009 19:46:21.393430   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393439   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1009 19:46:21.393445   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393451   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393467   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1009 19:46:21.393478   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1009 19:46:21.393482   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393486   46924 command_runner.go:130] >       "size": "87190579",
	I1009 19:46:21.393492   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393496   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393503   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393511   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393517   46924 command_runner.go:130] >     },
	I1009 19:46:21.393525   46924 command_runner.go:130] >     {
	I1009 19:46:21.393536   46924 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1009 19:46:21.393546   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393683   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1009 19:46:21.393696   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393704   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393715   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1009 19:46:21.393726   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1009 19:46:21.393735   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393741   46924 command_runner.go:130] >       "size": "94965812",
	I1009 19:46:21.393750   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393769   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393779   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393788   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393796   46924 command_runner.go:130] >     },
	I1009 19:46:21.393802   46924 command_runner.go:130] >     {
	I1009 19:46:21.393814   46924 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1009 19:46:21.393829   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393840   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1009 19:46:21.393847   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393852   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393867   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1009 19:46:21.393882   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1009 19:46:21.393890   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393900   46924 command_runner.go:130] >       "size": "1363676",
	I1009 19:46:21.393909   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393918   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393926   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393932   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393936   46924 command_runner.go:130] >     },
	I1009 19:46:21.393945   46924 command_runner.go:130] >     {
	I1009 19:46:21.393958   46924 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:46:21.393968   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393979   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:46:21.393990   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393999   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394012   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:46:21.394032   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:46:21.394041   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394048   46924 command_runner.go:130] >       "size": "31470524",
	I1009 19:46:21.394057   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394063   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394072   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394079   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394086   46924 command_runner.go:130] >     },
	I1009 19:46:21.394092   46924 command_runner.go:130] >     {
	I1009 19:46:21.394101   46924 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1009 19:46:21.394105   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394112   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1009 19:46:21.394119   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394133   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394147   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1009 19:46:21.394161   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1009 19:46:21.394170   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394177   46924 command_runner.go:130] >       "size": "63273227",
	I1009 19:46:21.394184   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394188   46924 command_runner.go:130] >       "username": "nonroot",
	I1009 19:46:21.394195   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394201   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394214   46924 command_runner.go:130] >     },
	I1009 19:46:21.394222   46924 command_runner.go:130] >     {
	I1009 19:46:21.394234   46924 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1009 19:46:21.394243   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394251   46924 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1009 19:46:21.394259   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394266   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394275   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1009 19:46:21.394287   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1009 19:46:21.394296   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394306   46924 command_runner.go:130] >       "size": "149009664",
	I1009 19:46:21.394314   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394324   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394334   46924 command_runner.go:130] >       },
	I1009 19:46:21.394343   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394350   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394357   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394361   46924 command_runner.go:130] >     },
	I1009 19:46:21.394367   46924 command_runner.go:130] >     {
	I1009 19:46:21.394377   46924 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1009 19:46:21.394385   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394397   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1009 19:46:21.394405   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394414   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394436   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1009 19:46:21.394446   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1009 19:46:21.394450   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394458   46924 command_runner.go:130] >       "size": "95237600",
	I1009 19:46:21.394468   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394477   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394485   46924 command_runner.go:130] >       },
	I1009 19:46:21.394492   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394501   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394509   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394517   46924 command_runner.go:130] >     },
	I1009 19:46:21.394523   46924 command_runner.go:130] >     {
	I1009 19:46:21.394532   46924 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1009 19:46:21.394536   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394547   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1009 19:46:21.394555   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394564   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394594   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1009 19:46:21.394609   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1009 19:46:21.394613   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394619   46924 command_runner.go:130] >       "size": "89437508",
	I1009 19:46:21.394623   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394631   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394640   46924 command_runner.go:130] >       },
	I1009 19:46:21.394650   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394659   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394668   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394676   46924 command_runner.go:130] >     },
	I1009 19:46:21.394682   46924 command_runner.go:130] >     {
	I1009 19:46:21.394694   46924 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1009 19:46:21.394701   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394706   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1009 19:46:21.394713   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394725   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394740   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1009 19:46:21.394754   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1009 19:46:21.394764   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394774   46924 command_runner.go:130] >       "size": "92733849",
	I1009 19:46:21.394780   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394787   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394791   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394795   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394803   46924 command_runner.go:130] >     },
	I1009 19:46:21.394812   46924 command_runner.go:130] >     {
	I1009 19:46:21.394825   46924 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1009 19:46:21.394834   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394848   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1009 19:46:21.394856   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394863   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394874   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1009 19:46:21.394887   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1009 19:46:21.394896   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394905   46924 command_runner.go:130] >       "size": "68420934",
	I1009 19:46:21.394914   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394923   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394932   46924 command_runner.go:130] >       },
	I1009 19:46:21.394940   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394947   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394955   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394959   46924 command_runner.go:130] >     },
	I1009 19:46:21.394962   46924 command_runner.go:130] >     {
	I1009 19:46:21.394973   46924 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1009 19:46:21.394981   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394993   46924 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1009 19:46:21.395001   46924 command_runner.go:130] >       ],
	I1009 19:46:21.395010   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.395031   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1009 19:46:21.395043   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1009 19:46:21.395049   46924 command_runner.go:130] >       ],
	I1009 19:46:21.395055   46924 command_runner.go:130] >       "size": "742080",
	I1009 19:46:21.395077   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.395084   46924 command_runner.go:130] >         "value": "65535"
	I1009 19:46:21.395091   46924 command_runner.go:130] >       },
	I1009 19:46:21.395097   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.395105   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.395112   46924 command_runner.go:130] >       "pinned": true
	I1009 19:46:21.395119   46924 command_runner.go:130] >     }
	I1009 19:46:21.395124   46924 command_runner.go:130] >   ]
	I1009 19:46:21.395137   46924 command_runner.go:130] > }
	I1009 19:46:21.395311   46924 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:46:21.395324   46924 cache_images.go:84] Images are preloaded, skipping loading
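	Both `crictl images --output json` calls feed the same decision: if every image required for the target Kubernetes version is already in CRI-O's store, the preload tarball never needs to be extracted or loaded. A rough sketch of that check (the required image list is read off the JSON above; this is not minikube's cache_images logic) could be:

// Sketch: parse `crictl images --output json` and report whether the
// expected v1.31.1 images are already present, so loading can be skipped.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/pause:3.10",
	}
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, r := range required {
		if !have[r] {
			missing = append(missing, r)
		}
	}
	if len(missing) == 0 {
		fmt.Println("all required images are preloaded; skipping loading")
	} else {
		fmt.Println("missing images:", strings.Join(missing, ", "))
	}
}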
	I1009 19:46:21.395333   46924 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1009 19:46:21.395441   46924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-707643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
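	The kubelet unit shown above is essentially a template filled in with per-node values: the binary path for the Kubernetes version, the hostname override, and the node IP. A minimal illustration with text/template, using the values logged for this node (this is not minikube's actual generator):

// Sketch: render the kubelet ExecStart drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

type node struct {
	KubernetesVersion string
	Hostname          string
	NodeIP            string
}

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, node{
		KubernetesVersion: "v1.31.1",
		Hostname:          "multinode-707643",
		NodeIP:            "192.168.39.10",
	})
}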
	I1009 19:46:21.395527   46924 ssh_runner.go:195] Run: crio config
	I1009 19:46:21.427794   46924 command_runner.go:130] ! time="2024-10-09 19:46:21.403713141Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1009 19:46:21.433025   46924 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:46:21.439362   46924 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:46:21.439383   46924 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:46:21.439389   46924 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:46:21.439393   46924 command_runner.go:130] > #
	I1009 19:46:21.439402   46924 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:46:21.439410   46924 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:46:21.439420   46924 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:46:21.439433   46924 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:46:21.439443   46924 command_runner.go:130] > # reload'.
	I1009 19:46:21.439454   46924 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:46:21.439465   46924 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:46:21.439478   46924 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:46:21.439487   46924 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:46:21.439494   46924 command_runner.go:130] > [crio]
	I1009 19:46:21.439504   46924 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:46:21.439515   46924 command_runner.go:130] > # containers images, in this directory.
	I1009 19:46:21.439523   46924 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1009 19:46:21.439540   46924 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:46:21.439550   46924 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1009 19:46:21.439563   46924 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:46:21.439573   46924 command_runner.go:130] > # imagestore = ""
	I1009 19:46:21.439583   46924 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:46:21.439595   46924 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:46:21.439606   46924 command_runner.go:130] > storage_driver = "overlay"
	I1009 19:46:21.439614   46924 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:46:21.439626   46924 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:46:21.439639   46924 command_runner.go:130] > storage_option = [
	I1009 19:46:21.439659   46924 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1009 19:46:21.439670   46924 command_runner.go:130] > ]
	I1009 19:46:21.439679   46924 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:46:21.439690   46924 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:46:21.439702   46924 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:46:21.439714   46924 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:46:21.439727   46924 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:46:21.439736   46924 command_runner.go:130] > # always happen on a node reboot
	I1009 19:46:21.439747   46924 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:46:21.439761   46924 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:46:21.439769   46924 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:46:21.439779   46924 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:46:21.439790   46924 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1009 19:46:21.439802   46924 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:46:21.439817   46924 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:46:21.439826   46924 command_runner.go:130] > # internal_wipe = true
	I1009 19:46:21.439841   46924 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:46:21.439852   46924 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:46:21.439860   46924 command_runner.go:130] > # internal_repair = false
	I1009 19:46:21.439865   46924 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:46:21.439878   46924 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:46:21.439889   46924 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:46:21.439900   46924 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:46:21.439912   46924 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:46:21.439921   46924 command_runner.go:130] > [crio.api]
	I1009 19:46:21.439932   46924 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:46:21.439942   46924 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:46:21.439953   46924 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:46:21.439960   46924 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:46:21.439969   46924 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:46:21.439980   46924 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:46:21.439989   46924 command_runner.go:130] > # stream_port = "0"
	I1009 19:46:21.440000   46924 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:46:21.440011   46924 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:46:21.440022   46924 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:46:21.440031   46924 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:46:21.440040   46924 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:46:21.440054   46924 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1009 19:46:21.440063   46924 command_runner.go:130] > # minutes.
	I1009 19:46:21.440069   46924 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:46:21.440082   46924 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:46:21.440095   46924 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1009 19:46:21.440104   46924 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:46:21.440113   46924 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:46:21.440125   46924 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:46:21.440142   46924 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1009 19:46:21.440151   46924 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:46:21.440165   46924 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:46:21.440174   46924 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1009 19:46:21.440188   46924 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:46:21.440198   46924 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1009 19:46:21.440210   46924 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:46:21.440218   46924 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:46:21.440226   46924 command_runner.go:130] > [crio.runtime]
	I1009 19:46:21.440234   46924 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:46:21.440245   46924 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:46:21.440252   46924 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:46:21.440267   46924 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:46:21.440277   46924 command_runner.go:130] > # default_ulimits = [
	I1009 19:46:21.440283   46924 command_runner.go:130] > # ]
	I1009 19:46:21.440294   46924 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:46:21.440303   46924 command_runner.go:130] > # no_pivot = false
	I1009 19:46:21.440312   46924 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:46:21.440322   46924 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:46:21.440332   46924 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:46:21.440343   46924 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:46:21.440356   46924 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:46:21.440368   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:46:21.440379   46924 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1009 19:46:21.440388   46924 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:46:21.440401   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:46:21.440409   46924 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:46:21.440415   46924 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:46:21.440425   46924 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:46:21.440441   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:46:21.440450   46924 command_runner.go:130] > conmon_env = [
	I1009 19:46:21.440462   46924 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1009 19:46:21.440469   46924 command_runner.go:130] > ]
	I1009 19:46:21.440478   46924 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:46:21.440489   46924 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:46:21.440499   46924 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:46:21.440506   46924 command_runner.go:130] > # default_env = [
	I1009 19:46:21.440512   46924 command_runner.go:130] > # ]
	I1009 19:46:21.440524   46924 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:46:21.440538   46924 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:46:21.440548   46924 command_runner.go:130] > # selinux = false
	I1009 19:46:21.440560   46924 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:46:21.440572   46924 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1009 19:46:21.440584   46924 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1009 19:46:21.440593   46924 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:46:21.440602   46924 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1009 19:46:21.440611   46924 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1009 19:46:21.440622   46924 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1009 19:46:21.440632   46924 command_runner.go:130] > # which might increase security.
	I1009 19:46:21.440639   46924 command_runner.go:130] > # This option is currently deprecated,
	I1009 19:46:21.440652   46924 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1009 19:46:21.440662   46924 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1009 19:46:21.440675   46924 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:46:21.440687   46924 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:46:21.440699   46924 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:46:21.440709   46924 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:46:21.440719   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.440729   46924 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:46:21.440737   46924 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:46:21.440748   46924 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:46:21.440757   46924 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:46:21.440770   46924 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:46:21.440779   46924 command_runner.go:130] > # blockio parameters.
	I1009 19:46:21.440788   46924 command_runner.go:130] > # blockio_reload = false
	I1009 19:46:21.440798   46924 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:46:21.440804   46924 command_runner.go:130] > # irqbalance daemon.
	I1009 19:46:21.440812   46924 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:46:21.440827   46924 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:46:21.440841   46924 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:46:21.440853   46924 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:46:21.440866   46924 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:46:21.440878   46924 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:46:21.440886   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.440892   46924 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:46:21.440900   46924 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:46:21.440910   46924 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1009 19:46:21.440937   46924 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:46:21.440946   46924 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:46:21.440959   46924 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:46:21.440971   46924 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:46:21.440978   46924 command_runner.go:130] > # will be added.
	I1009 19:46:21.440982   46924 command_runner.go:130] > # default_capabilities = [
	I1009 19:46:21.440989   46924 command_runner.go:130] > # 	"CHOWN",
	I1009 19:46:21.440994   46924 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:46:21.441003   46924 command_runner.go:130] > # 	"FSETID",
	I1009 19:46:21.441009   46924 command_runner.go:130] > # 	"FOWNER",
	I1009 19:46:21.441018   46924 command_runner.go:130] > # 	"SETGID",
	I1009 19:46:21.441027   46924 command_runner.go:130] > # 	"SETUID",
	I1009 19:46:21.441033   46924 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:46:21.441040   46924 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:46:21.441048   46924 command_runner.go:130] > # 	"KILL",
	I1009 19:46:21.441053   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441067   46924 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:46:21.441076   46924 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:46:21.441082   46924 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:46:21.441094   46924 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:46:21.441105   46924 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:46:21.441112   46924 command_runner.go:130] > default_sysctls = [
	I1009 19:46:21.441122   46924 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:46:21.441127   46924 command_runner.go:130] > ]
	I1009 19:46:21.441137   46924 command_runner.go:130] > # List of devices on the host that a
	I1009 19:46:21.441149   46924 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:46:21.441158   46924 command_runner.go:130] > # allowed_devices = [
	I1009 19:46:21.441167   46924 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:46:21.441174   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441179   46924 command_runner.go:130] > # List of additional devices, specified as
	I1009 19:46:21.441192   46924 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:46:21.441204   46924 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:46:21.441219   46924 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:46:21.441228   46924 command_runner.go:130] > # additional_devices = [
	I1009 19:46:21.441236   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441244   46924 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:46:21.441253   46924 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:46:21.441265   46924 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:46:21.441272   46924 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:46:21.441276   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441285   46924 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:46:21.441298   46924 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:46:21.441306   46924 command_runner.go:130] > # Defaults to false.
	I1009 19:46:21.441317   46924 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:46:21.441332   46924 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:46:21.441344   46924 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:46:21.441352   46924 command_runner.go:130] > # hooks_dir = [
	I1009 19:46:21.441360   46924 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:46:21.441363   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441375   46924 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:46:21.441387   46924 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:46:21.441399   46924 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:46:21.441407   46924 command_runner.go:130] > #
	I1009 19:46:21.441419   46924 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:46:21.441432   46924 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:46:21.441443   46924 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:46:21.441450   46924 command_runner.go:130] > #
	I1009 19:46:21.441456   46924 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:46:21.441468   46924 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:46:21.441480   46924 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:46:21.441491   46924 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:46:21.441499   46924 command_runner.go:130] > #
	I1009 19:46:21.441509   46924 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:46:21.441521   46924 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:46:21.441534   46924 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:46:21.441542   46924 command_runner.go:130] > pids_limit = 1024
	I1009 19:46:21.441550   46924 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 19:46:21.441561   46924 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:46:21.441574   46924 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:46:21.441589   46924 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:46:21.441598   46924 command_runner.go:130] > # log_size_max = -1
	I1009 19:46:21.441611   46924 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:46:21.441625   46924 command_runner.go:130] > # log_to_journald = false
	I1009 19:46:21.441634   46924 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:46:21.441643   46924 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:46:21.441654   46924 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:46:21.441665   46924 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:46:21.441675   46924 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:46:21.441684   46924 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:46:21.441696   46924 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:46:21.441705   46924 command_runner.go:130] > # read_only = false
	I1009 19:46:21.441717   46924 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:46:21.441729   46924 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:46:21.441735   46924 command_runner.go:130] > # live configuration reload.
	I1009 19:46:21.441739   46924 command_runner.go:130] > # log_level = "info"
	I1009 19:46:21.441751   46924 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:46:21.441762   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.441772   46924 command_runner.go:130] > # log_filter = ""
	I1009 19:46:21.441783   46924 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:46:21.441797   46924 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:46:21.441806   46924 command_runner.go:130] > # separated by comma.
	I1009 19:46:21.441820   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441835   46924 command_runner.go:130] > # uid_mappings = ""
	I1009 19:46:21.441849   46924 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:46:21.441862   46924 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:46:21.441871   46924 command_runner.go:130] > # separated by comma.
	I1009 19:46:21.441885   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441894   46924 command_runner.go:130] > # gid_mappings = ""
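	A sketch of the mapping syntax described above (the ranges are hypothetical, and both options are deprecated as noted):

		uid_mappings = "0:100000:65536"
		gid_mappings = "0:100000:65536"

	Each range maps containerUID:HostUID:Size (respectively containerGID:HostGID:Size); multiple ranges are comma-separated.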
	I1009 19:46:21.441906   46924 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:46:21.441919   46924 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:46:21.441928   46924 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:46:21.441939   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441957   46924 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:46:21.441971   46924 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:46:21.441984   46924 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:46:21.441996   46924 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:46:21.442010   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.442017   46924 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:46:21.442029   46924 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:46:21.442041   46924 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:46:21.442056   46924 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:46:21.442065   46924 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:46:21.442077   46924 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:46:21.442089   46924 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:46:21.442100   46924 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:46:21.442107   46924 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:46:21.442111   46924 command_runner.go:130] > drop_infra_ctr = false
	I1009 19:46:21.442119   46924 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:46:21.442131   46924 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:46:21.442143   46924 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:46:21.442152   46924 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:46:21.442163   46924 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:46:21.442179   46924 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:46:21.442189   46924 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:46:21.442197   46924 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:46:21.442202   46924 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:46:21.442214   46924 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:46:21.442225   46924 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:46:21.442232   46924 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:46:21.442245   46924 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:46:21.442260   46924 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1009 19:46:21.442272   46924 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:46:21.442284   46924 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:46:21.442292   46924 command_runner.go:130] > # enable_criu_support = false
	I1009 19:46:21.442299   46924 command_runner.go:130] > # Enable/disable the generation of the container and
	I1009 19:46:21.442310   46924 command_runner.go:130] > # sandbox lifecycle events sent to the Kubelet to optimize the PLEG
	I1009 19:46:21.442321   46924 command_runner.go:130] > # enable_pod_events = false
	I1009 19:46:21.442331   46924 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:46:21.442353   46924 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:46:21.442363   46924 command_runner.go:130] > # default_runtime = "runc"
	I1009 19:46:21.442373   46924 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:46:21.442387   46924 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1009 19:46:21.442401   46924 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:46:21.442415   46924 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:46:21.442430   46924 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:46:21.442442   46924 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:46:21.442452   46924 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:46:21.442459   46924 command_runner.go:130] > # ]
	I1009 19:46:21.442469   46924 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:46:21.442481   46924 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:46:21.442491   46924 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:46:21.442499   46924 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:46:21.442504   46924 command_runner.go:130] > #
	I1009 19:46:21.442514   46924 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:46:21.442524   46924 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:46:21.442577   46924 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:46:21.442588   46924 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:46:21.442596   46924 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:46:21.442601   46924 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:46:21.442611   46924 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:46:21.442620   46924 command_runner.go:130] > # monitor_env = []
	I1009 19:46:21.442628   46924 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:46:21.442638   46924 command_runner.go:130] > # allowed_annotations = []
	I1009 19:46:21.442649   46924 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:46:21.442657   46924 command_runner.go:130] > # Where:
	I1009 19:46:21.442669   46924 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:46:21.442681   46924 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:46:21.442691   46924 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:46:21.442700   46924 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:46:21.442709   46924 command_runner.go:130] > #   in $PATH.
	I1009 19:46:21.442722   46924 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:46:21.442733   46924 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:46:21.442746   46924 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:46:21.442754   46924 command_runner.go:130] > #   state.
	I1009 19:46:21.442765   46924 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:46:21.442778   46924 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 19:46:21.442788   46924 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:46:21.442796   46924 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:46:21.442809   46924 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:46:21.442822   46924 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:46:21.442837   46924 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:46:21.442850   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:46:21.442864   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:46:21.442876   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:46:21.442886   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:46:21.442896   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:46:21.442909   46924 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:46:21.442922   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:46:21.442935   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:46:21.442947   46924 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:46:21.442959   46924 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:46:21.442969   46924 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:46:21.442983   46924 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:46:21.442990   46924 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:46:21.442999   46924 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:46:21.443009   46924 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:46:21.443023   46924 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1009 19:46:21.443034   46924 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:46:21.443046   46924 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:46:21.443057   46924 command_runner.go:130] > #   runtime executable paths for the runtime handler.
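	As a sketch of the runtime-handler format described above (the handler name, paths, and allowed annotation here are illustrative assumptions, not values taken from this run), an additional entry in crio.conf could look like:

		[crio.runtime.runtimes.crun]
		runtime_path = "/usr/bin/crun"        # assumed install path
		runtime_type = "oci"
		runtime_root = "/run/crun"
		monitor_path = "/usr/libexec/crio/conmon"
		allowed_annotations = ["io.kubernetes.cri-o.Devices"]

	With such an entry, a pod selects the handler via its runtime class, and only the annotations listed in allowed_annotations become eligible for processing by that handler.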
	I1009 19:46:21.443073   46924 command_runner.go:130] > #
	I1009 19:46:21.443083   46924 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:46:21.443092   46924 command_runner.go:130] > #
	I1009 19:46:21.443201   46924 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:46:21.443222   46924 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:46:21.443233   46924 command_runner.go:130] > #
	I1009 19:46:21.443247   46924 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:46:21.443261   46924 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:46:21.443274   46924 command_runner.go:130] > #
	I1009 19:46:21.443289   46924 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:46:21.443299   46924 command_runner.go:130] > # feature.
	I1009 19:46:21.443307   46924 command_runner.go:130] > #
	I1009 19:46:21.443317   46924 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1009 19:46:21.443331   46924 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:46:21.443346   46924 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:46:21.443368   46924 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:46:21.443421   46924 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:46:21.443437   46924 command_runner.go:130] > #
	I1009 19:46:21.443451   46924 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:46:21.443463   46924 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:46:21.443472   46924 command_runner.go:130] > #
	I1009 19:46:21.443485   46924 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:46:21.443499   46924 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:46:21.443507   46924 command_runner.go:130] > #
	I1009 19:46:21.443520   46924 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:46:21.443532   46924 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:46:21.443541   46924 command_runner.go:130] > # limitation.
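	A minimal sketch of how the runc handler defined below could be allowed to process the notifier annotation (this run's configuration does not set allowed_annotations, so the list here is an assumption):

		[crio.runtime.runtimes.runc]
		runtime_path = "/usr/bin/runc"
		allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]

	A pod opting in would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and, as noted above, set restartPolicy to Never.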
	I1009 19:46:21.443554   46924 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:46:21.443564   46924 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1009 19:46:21.443574   46924 command_runner.go:130] > runtime_type = "oci"
	I1009 19:46:21.443584   46924 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:46:21.443593   46924 command_runner.go:130] > runtime_config_path = ""
	I1009 19:46:21.443604   46924 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:46:21.443613   46924 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:46:21.443620   46924 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:46:21.443624   46924 command_runner.go:130] > monitor_env = [
	I1009 19:46:21.443636   46924 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1009 19:46:21.443645   46924 command_runner.go:130] > ]
	I1009 19:46:21.443653   46924 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:46:21.443667   46924 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:46:21.443678   46924 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:46:21.443695   46924 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:46:21.443709   46924 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1009 19:46:21.443719   46924 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1009 19:46:21.443731   46924 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:46:21.443749   46924 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:46:21.443765   46924 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:46:21.443777   46924 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:46:21.443787   46924 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:46:21.443793   46924 command_runner.go:130] > # Example:
	I1009 19:46:21.443800   46924 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:46:21.443806   46924 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:46:21.443810   46924 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:46:21.443825   46924 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:46:21.443831   46924 command_runner.go:130] > # cpuset = "0-1"
	I1009 19:46:21.443838   46924 command_runner.go:130] > # cpushares = 0
	I1009 19:46:21.443844   46924 command_runner.go:130] > # Where:
	I1009 19:46:21.443851   46924 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:46:21.443861   46924 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:46:21.443870   46924 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:46:21.443879   46924 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:46:21.443890   46924 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:46:21.443895   46924 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
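	To make the workloads description concrete, a sketch under assumed names and values (nothing here comes from this run's configuration) could be:

		[crio.runtime.workloads.throttled]
		activation_annotation = "io.crio/throttled"
		annotation_prefix = "io.crio.throttled"
		[crio.runtime.workloads.throttled.resources]
		cpushares = 512     # default CPU shares for opted-in containers
		cpuset = "0-1"      # default CPU set, Linux CPU list format

	A pod opting in would carry the io.crio/throttled annotation (key only), and could override a single container with an annotation of the form io.crio.throttled.cpushares/<container-name>.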
	I1009 19:46:21.443900   46924 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:46:21.443910   46924 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:46:21.443916   46924 command_runner.go:130] > # Default value is set to true
	I1009 19:46:21.443924   46924 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:46:21.443933   46924 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:46:21.443940   46924 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:46:21.443947   46924 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:46:21.443955   46924 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:46:21.443968   46924 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:46:21.443975   46924 command_runner.go:130] > #
	I1009 19:46:21.443981   46924 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:46:21.443994   46924 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1009 19:46:21.444008   46924 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1009 19:46:21.444021   46924 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1009 19:46:21.444033   46924 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
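	Per the note above, registry defaults normally come from /etc/containers/registries.conf rather than from crio.conf. A minimal sketch of that file (the registry host is hypothetical) is:

		unqualified-search-registries = ["docker.io"]

		[[registry]]
		prefix = "registry.example.internal:5000"
		location = "registry.example.internal:5000"
		insecure = true

	Only if CRI-O alone should diverge from the system defaults would the registry settings be placed in crio.conf instead.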
	I1009 19:46:21.444042   46924 command_runner.go:130] > [crio.image]
	I1009 19:46:21.444051   46924 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:46:21.444060   46924 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:46:21.444071   46924 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:46:21.444079   46924 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:46:21.444085   46924 command_runner.go:130] > # global_auth_file = ""
	I1009 19:46:21.444096   46924 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:46:21.444104   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.444116   46924 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1009 19:46:21.444127   46924 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:46:21.444139   46924 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:46:21.444151   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.444163   46924 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:46:21.444171   46924 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:46:21.444180   46924 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 19:46:21.444193   46924 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 19:46:21.444220   46924 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:46:21.444230   46924 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:46:21.444242   46924 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:46:21.444254   46924 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:46:21.444263   46924 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:46:21.444278   46924 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:46:21.444291   46924 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:46:21.444304   46924 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:46:21.444314   46924 command_runner.go:130] > # pinned_images = [
	I1009 19:46:21.444319   46924 command_runner.go:130] > # ]
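	A sketch of the pinning patterns described above (the image names besides the configured pause image are illustrative):

		pinned_images = [
			"registry.k8s.io/pause:3.10",     # exact match
			"quay.io/example/critical-*",     # glob match, trailing wildcard
			"*coredns*",                      # keyword match, wildcards on both ends
		]

	Any image matching one of these patterns is excluded from the kubelet's garbage collection.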
	I1009 19:46:21.444331   46924 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:46:21.444346   46924 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:46:21.444357   46924 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:46:21.444371   46924 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:46:21.444383   46924 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:46:21.444393   46924 command_runner.go:130] > # signature_policy = ""
	I1009 19:46:21.444406   46924 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:46:21.444419   46924 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:46:21.444431   46924 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:46:21.444444   46924 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1009 19:46:21.444454   46924 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:46:21.444463   46924 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:46:21.444476   46924 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:46:21.444489   46924 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:46:21.444498   46924 command_runner.go:130] > # changing them here.
	I1009 19:46:21.444508   46924 command_runner.go:130] > # insecure_registries = [
	I1009 19:46:21.444516   46924 command_runner.go:130] > # ]
	I1009 19:46:21.444526   46924 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:46:21.444534   46924 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:46:21.444538   46924 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:46:21.444545   46924 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:46:21.444555   46924 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:46:21.444571   46924 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 19:46:21.444580   46924 command_runner.go:130] > # CNI plugins.
	I1009 19:46:21.444589   46924 command_runner.go:130] > [crio.network]
	I1009 19:46:21.444602   46924 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:46:21.444616   46924 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 19:46:21.444625   46924 command_runner.go:130] > # cni_default_network = ""
	I1009 19:46:21.444635   46924 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:46:21.444641   46924 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:46:21.444650   46924 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:46:21.444659   46924 command_runner.go:130] > # plugin_dirs = [
	I1009 19:46:21.444669   46924 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:46:21.444677   46924 command_runner.go:130] > # ]
	I1009 19:46:21.444689   46924 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:46:21.444698   46924 command_runner.go:130] > [crio.metrics]
	I1009 19:46:21.444714   46924 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:46:21.444722   46924 command_runner.go:130] > enable_metrics = true
	I1009 19:46:21.444726   46924 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:46:21.444735   46924 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:46:21.444747   46924 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:46:21.444760   46924 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:46:21.444772   46924 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:46:21.444781   46924 command_runner.go:130] > # metrics_collectors = [
	I1009 19:46:21.444790   46924 command_runner.go:130] > # 	"operations",
	I1009 19:46:21.444800   46924 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1009 19:46:21.444809   46924 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1009 19:46:21.444816   46924 command_runner.go:130] > # 	"operations_errors",
	I1009 19:46:21.444823   46924 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1009 19:46:21.444832   46924 command_runner.go:130] > # 	"image_pulls_by_name",
	I1009 19:46:21.444842   46924 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1009 19:46:21.444852   46924 command_runner.go:130] > # 	"image_pulls_failures",
	I1009 19:46:21.444862   46924 command_runner.go:130] > # 	"image_pulls_successes",
	I1009 19:46:21.444870   46924 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:46:21.444880   46924 command_runner.go:130] > # 	"image_layer_reuse",
	I1009 19:46:21.444890   46924 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:46:21.444897   46924 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:46:21.444901   46924 command_runner.go:130] > # 	"containers_oom",
	I1009 19:46:21.444905   46924 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:46:21.444914   46924 command_runner.go:130] > # 	"operations_total",
	I1009 19:46:21.444924   46924 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:46:21.444931   46924 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:46:21.444941   46924 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:46:21.444951   46924 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:46:21.444962   46924 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:46:21.444971   46924 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:46:21.444982   46924 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:46:21.444992   46924 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:46:21.444998   46924 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:46:21.445007   46924 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:46:21.445016   46924 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:46:21.445022   46924 command_runner.go:130] > # ]
	I1009 19:46:21.445033   46924 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:46:21.445042   46924 command_runner.go:130] > # metrics_port = 9090
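	A sketch of enabling only a subset of collectors (the selection is arbitrary; this run keeps the commented-out default of all collectors):

		[crio.metrics]
		enable_metrics = true
		metrics_collectors = ["operations", "image_pulls_failures", "containers_oom_total"]
		metrics_port = 9090

	The unprefixed names are equivalent to their "crio_" and "container_runtime_" prefixed forms, as noted above.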
	I1009 19:46:21.445052   46924 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:46:21.445061   46924 command_runner.go:130] > # metrics_socket = ""
	I1009 19:46:21.445072   46924 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:46:21.445084   46924 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:46:21.445093   46924 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:46:21.445103   46924 command_runner.go:130] > # certificate on any modification event.
	I1009 19:46:21.445113   46924 command_runner.go:130] > # metrics_cert = ""
	I1009 19:46:21.445122   46924 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:46:21.445133   46924 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:46:21.445143   46924 command_runner.go:130] > # metrics_key = ""
	I1009 19:46:21.445154   46924 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:46:21.445163   46924 command_runner.go:130] > [crio.tracing]
	I1009 19:46:21.445174   46924 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:46:21.445182   46924 command_runner.go:130] > # enable_tracing = false
	I1009 19:46:21.445187   46924 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:46:21.445197   46924 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1009 19:46:21.445216   46924 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:46:21.445226   46924 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:46:21.445236   46924 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:46:21.445245   46924 command_runner.go:130] > [crio.nri]
	I1009 19:46:21.445254   46924 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:46:21.445264   46924 command_runner.go:130] > # enable_nri = false
	I1009 19:46:21.445273   46924 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:46:21.445281   46924 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:46:21.445285   46924 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:46:21.445294   46924 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:46:21.445306   46924 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:46:21.445315   46924 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:46:21.445331   46924 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:46:21.445340   46924 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:46:21.445351   46924 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:46:21.445361   46924 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:46:21.445371   46924 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:46:21.445378   46924 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:46:21.445386   46924 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:46:21.445395   46924 command_runner.go:130] > [crio.stats]
	I1009 19:46:21.445410   46924 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:46:21.445421   46924 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:46:21.445430   46924 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:46:21.445530   46924 cni.go:84] Creating CNI manager for ""
	I1009 19:46:21.445544   46924 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1009 19:46:21.445559   46924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:46:21.445590   46924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-707643 NodeName:multinode-707643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:46:21.445731   46924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-707643"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:46:21.445799   46924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:46:21.455571   46924 command_runner.go:130] > kubeadm
	I1009 19:46:21.455588   46924 command_runner.go:130] > kubectl
	I1009 19:46:21.455593   46924 command_runner.go:130] > kubelet
	I1009 19:46:21.455608   46924 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:46:21.455652   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:46:21.464750   46924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1009 19:46:21.480502   46924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:46:21.495895   46924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1009 19:46:21.511503   46924 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I1009 19:46:21.515297   46924 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I1009 19:46:21.515347   46924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:46:21.650553   46924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:46:21.666406   46924 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643 for IP: 192.168.39.10
	I1009 19:46:21.666431   46924 certs.go:194] generating shared ca certs ...
	I1009 19:46:21.666449   46924 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:46:21.666621   46924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:46:21.666671   46924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:46:21.666684   46924 certs.go:256] generating profile certs ...
	I1009 19:46:21.666794   46924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/client.key
	I1009 19:46:21.666865   46924 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key.ba20182f
	I1009 19:46:21.666909   46924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key
	I1009 19:46:21.666923   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:46:21.666941   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:46:21.666958   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:46:21.666975   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:46:21.666991   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:46:21.667007   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:46:21.667026   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:46:21.667044   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:46:21.667198   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:46:21.667244   46924 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:46:21.667259   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:46:21.667294   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:46:21.667324   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:46:21.667357   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:46:21.667408   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:46:21.667445   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.667465   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.667483   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:46:21.668076   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:46:21.691784   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:46:21.713873   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:46:21.737163   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:46:21.759741   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:46:21.782475   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:46:21.805295   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:46:21.830327   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:46:21.853493   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:46:21.876808   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:46:21.899853   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:46:21.923113   46924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:46:21.939346   46924 ssh_runner.go:195] Run: openssl version
	I1009 19:46:21.945688   46924 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1009 19:46:21.945771   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:46:21.956371   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960732   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960878   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960929   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.966563   46924 command_runner.go:130] > b5213941
	I1009 19:46:21.966637   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:46:21.975864   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:46:21.986710   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.990998   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.991026   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.991058   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.996622   46924 command_runner.go:130] > 51391683
	I1009 19:46:21.996703   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:46:22.005602   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:46:22.015951   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020231   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020252   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020286   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.025527   46924 command_runner.go:130] > 3ec20f2e
	I1009 19:46:22.025713   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:46:22.034354   46924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:46:22.038422   46924 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:46:22.038442   46924 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:46:22.038451   46924 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1009 19:46:22.038461   46924 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:46:22.038471   46924 command_runner.go:130] > Access: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038481   46924 command_runner.go:130] > Modify: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038489   46924 command_runner.go:130] > Change: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038499   46924 command_runner.go:130] >  Birth: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038649   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:46:22.044086   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.044148   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:46:22.049449   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.049505   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:46:22.054726   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.054930   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:46:22.060446   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.060510   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:46:22.066485   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.066544   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:46:22.072059   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.072133   46924 kubeadm.go:392] StartCluster: {Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:f
alse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:46:22.072273   46924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:46:22.072339   46924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:46:22.107184   46924 command_runner.go:130] > 137f5e5991a475816f0f2fd1ae1900dccb0ee8955add3ac300727bbfd51b268e
	I1009 19:46:22.107213   46924 command_runner.go:130] > 791a11c2b24e7aed80bbfe294018d137cdf325edc2ad6b7a05b00abab933fada
	I1009 19:46:22.107221   46924 command_runner.go:130] > 11e750a0fe4b254cb90bf3513caf97a0129c0866c7f35eea2190de27e986f9b7
	I1009 19:46:22.107228   46924 command_runner.go:130] > cf9bee6158f723ce4f9a1a4c961b79ab623bda893a438e809a36c0381b6ddbfb
	I1009 19:46:22.107234   46924 command_runner.go:130] > e48ae1f37a3b271ac5db564c8ee7e4e27ba72a41a7c53e9c2e6081ba9d9c21e8
	I1009 19:46:22.107239   46924 command_runner.go:130] > 9fd8c438bbd4f205686715faf46ce3310a5b05263c9bb6183f8733657747a4c1
	I1009 19:46:22.107244   46924 command_runner.go:130] > dda813262aeb7c542d6c02c55697b41375e79273caf8095406901f40a1563fec
	I1009 19:46:22.107252   46924 command_runner.go:130] > be275566ef7e4f33148de7fac7e82f5811095be11b8d5a2501f04768feefc372
	I1009 19:46:22.108628   46924 cri.go:89] found id: "137f5e5991a475816f0f2fd1ae1900dccb0ee8955add3ac300727bbfd51b268e"
	I1009 19:46:22.108651   46924 cri.go:89] found id: "791a11c2b24e7aed80bbfe294018d137cdf325edc2ad6b7a05b00abab933fada"
	I1009 19:46:22.108657   46924 cri.go:89] found id: "11e750a0fe4b254cb90bf3513caf97a0129c0866c7f35eea2190de27e986f9b7"
	I1009 19:46:22.108662   46924 cri.go:89] found id: "cf9bee6158f723ce4f9a1a4c961b79ab623bda893a438e809a36c0381b6ddbfb"
	I1009 19:46:22.108666   46924 cri.go:89] found id: "e48ae1f37a3b271ac5db564c8ee7e4e27ba72a41a7c53e9c2e6081ba9d9c21e8"
	I1009 19:46:22.108670   46924 cri.go:89] found id: "9fd8c438bbd4f205686715faf46ce3310a5b05263c9bb6183f8733657747a4c1"
	I1009 19:46:22.108674   46924 cri.go:89] found id: "dda813262aeb7c542d6c02c55697b41375e79273caf8095406901f40a1563fec"
	I1009 19:46:22.108678   46924 cri.go:89] found id: "be275566ef7e4f33148de7fac7e82f5811095be11b8d5a2501f04768feefc372"
	I1009 19:46:22.108681   46924 cri.go:89] found id: ""
	I1009 19:46:22.108723   46924 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
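The stdout above is cut off while minikube is enumerating the kube-system containers through the CRI (cri.go, `crictl ps`). A minimal sketch of running the same enumeration by hand against this profile, using the exact crictl command from the log (the `minikube ssh` wrapper and the placeholder container ID are the only additions):

	# List all kube-system container IDs via the CRI, as the cri.go step in the log does
	out/minikube-linux-amd64 -p multinode-707643 ssh -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# Any returned ID can then be inspected for its state and metadata
	out/minikube-linux-amd64 -p multinode-707643 ssh -- sudo crictl inspect <container-id>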
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-707643 -n multinode-707643
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-707643 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.27s)
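For reference, the certificate installation in the restart log above follows the standard OpenSSL subject-hash layout: each CA is copied to /usr/share/ca-certificates and then linked into /etc/ssl/certs under its subject hash. A minimal sketch of the same two links for the minikube CA, with the paths and hash value taken from the log (run inside the node):

	# Expose the CA under /etc/ssl/certs by name
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem

	# Compute the OpenSSL subject hash and add the <hash>.0 lookup link OpenSSL clients use
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"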

                                                
                                    
TestMultiNode/serial/StopMultiNode (145.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 stop
E1009 19:49:51.615005   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:49:51.908736   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-707643 stop: exit status 82 (2m0.470319276s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-707643-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
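The exit status 82 (GUEST_STOP_TIMEOUT) above means the kvm2 driver gave up waiting for the guest to power off while libvirt still reported it "Running". When that happens, the libvirt side can be inspected and shut down directly; a hedged sketch (the domain name is assumed to match the node name, as the DBG lines elsewhere in this report suggest):

	# Ask libvirt what state the stuck guest is in
	sudo virsh list --all
	sudo virsh domstate multinode-707643-m02

	# If it is still running, request an ACPI shutdown; destroy is the hard power-off of last resort
	sudo virsh shutdown multinode-707643-m02
	# sudo virsh destroy multinode-707643-m02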
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-707643 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 status: (18.706879826s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr: (3.360784772s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr": 
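The two assertions above count how many hosts and kubelets report "Stopped" in the status output; after `minikube stop` on this two-node cluster (m03 was deleted earlier, per the audit table below) both counts should be 2, and here they came up short, consistent with the GUEST_STOP_TIMEOUT above. A rough manual reproduction of the check (the grep patterns assume the usual `minikube status` field names and are not taken from the test code):

	out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr | tee status.txt
	grep -c 'host: Stopped'    status.txt   # expected 2 for a fully stopped two-node cluster
	grep -c 'kubelet: Stopped' status.txt   # expected 2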
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-707643 -n multinode-707643
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 logs -n 25: (2.052016233s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:41 UTC |
	|         | multinode-707643:/home/docker/cp-test_multinode-707643-m02_multinode-707643.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:41 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643 sudo cat                                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m02_multinode-707643.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03:/home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643-m03 sudo cat                                   | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp testdata/cp-test.txt                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643:/home/docker/cp-test_multinode-707643-m03_multinode-707643.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643 sudo cat                                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m03_multinode-707643.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt                       | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m02:/home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n                                                                 | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | multinode-707643-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-707643 ssh -n multinode-707643-m02 sudo cat                                   | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | /home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-707643 node stop m03                                                          | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	| node    | multinode-707643 node start                                                             | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC | 09 Oct 24 19:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-707643                                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC |                     |
	| stop    | -p multinode-707643                                                                     | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:42 UTC |                     |
	| start   | -p multinode-707643                                                                     | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:44 UTC | 09 Oct 24 19:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-707643                                                                | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:48 UTC |                     |
	| node    | multinode-707643 node delete                                                            | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:48 UTC | 09 Oct 24 19:48 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-707643 stop                                                                   | multinode-707643 | jenkins | v1.34.0 | 09 Oct 24 19:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 19:44:47
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 19:44:47.927416   46924 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:44:47.927551   46924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:47.927561   46924 out.go:358] Setting ErrFile to fd 2...
	I1009 19:44:47.927567   46924 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:44:47.927727   46924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:44:47.928249   46924 out.go:352] Setting JSON to false
	I1009 19:44:47.929078   46924 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5229,"bootTime":1728497859,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:44:47.929166   46924 start.go:139] virtualization: kvm guest
	I1009 19:44:47.931571   46924 out.go:177] * [multinode-707643] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:44:47.932944   46924 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:44:47.932947   46924 notify.go:220] Checking for updates...
	I1009 19:44:47.935531   46924 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:44:47.936771   46924 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:44:47.938037   46924 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:44:47.939072   46924 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:44:47.940170   46924 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:44:47.941554   46924 config.go:182] Loaded profile config "multinode-707643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:44:47.941646   46924 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:44:47.942059   46924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:47.942106   46924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:47.956793   46924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I1009 19:44:47.957165   46924 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:47.957653   46924 main.go:141] libmachine: Using API Version  1
	I1009 19:44:47.957673   46924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:47.958029   46924 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:47.958241   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:47.993605   46924 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 19:44:47.994761   46924 start.go:297] selected driver: kvm2
	I1009 19:44:47.994772   46924 start.go:901] validating driver "kvm2" against &{Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false insp
ektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptim
izations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:47.994899   46924 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:44:47.995213   46924 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:47.995278   46924 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 19:44:48.009668   46924 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 19:44:48.010381   46924 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 19:44:48.010422   46924 cni.go:84] Creating CNI manager for ""
	I1009 19:44:48.010489   46924 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1009 19:44:48.010570   46924 start.go:340] cluster config:
	{Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow
:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:44:48.010711   46924 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 19:44:48.012554   46924 out.go:177] * Starting "multinode-707643" primary control-plane node in "multinode-707643" cluster
	I1009 19:44:48.013726   46924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:44:48.013760   46924 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 19:44:48.013768   46924 cache.go:56] Caching tarball of preloaded images
	I1009 19:44:48.013842   46924 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 19:44:48.013853   46924 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 19:44:48.013946   46924 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/config.json ...
	I1009 19:44:48.014123   46924 start.go:360] acquireMachinesLock for multinode-707643: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 19:44:48.014173   46924 start.go:364] duration metric: took 35.182µs to acquireMachinesLock for "multinode-707643"
	I1009 19:44:48.014186   46924 start.go:96] Skipping create...Using existing machine configuration
	I1009 19:44:48.014192   46924 fix.go:54] fixHost starting: 
	I1009 19:44:48.014427   46924 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:44:48.014456   46924 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:44:48.027924   46924 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I1009 19:44:48.028369   46924 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:44:48.028858   46924 main.go:141] libmachine: Using API Version  1
	I1009 19:44:48.028875   46924 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:44:48.029208   46924 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:44:48.029436   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:48.029649   46924 main.go:141] libmachine: (multinode-707643) Calling .GetState
	I1009 19:44:48.031175   46924 fix.go:112] recreateIfNeeded on multinode-707643: state=Running err=<nil>
	W1009 19:44:48.031221   46924 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 19:44:48.033074   46924 out.go:177] * Updating the running kvm2 "multinode-707643" VM ...
	I1009 19:44:48.034292   46924 machine.go:93] provisionDockerMachine start ...
	I1009 19:44:48.034311   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:44:48.034481   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.036877   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.037299   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.037324   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.037452   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.037634   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.037760   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.037906   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.038047   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.038264   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.038282   46924 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 19:44:48.144056   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-707643
	
	I1009 19:44:48.144079   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.144288   46924 buildroot.go:166] provisioning hostname "multinode-707643"
	I1009 19:44:48.144307   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.144495   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.146973   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.147401   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.147425   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.147567   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.147715   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.147867   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.147960   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.148107   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.148280   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.148297   46924 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-707643 && echo "multinode-707643" | sudo tee /etc/hostname
	I1009 19:44:48.270817   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-707643
	
	I1009 19:44:48.270847   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.273513   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.273889   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.273918   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.274041   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.274234   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.274394   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.274525   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.274692   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.274957   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.274984   46924 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-707643' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-707643/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-707643' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 19:44:48.380523   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 19:44:48.380554   46924 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 19:44:48.380576   46924 buildroot.go:174] setting up certificates
	I1009 19:44:48.380591   46924 provision.go:84] configureAuth start
	I1009 19:44:48.380605   46924 main.go:141] libmachine: (multinode-707643) Calling .GetMachineName
	I1009 19:44:48.380853   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:44:48.383484   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.383808   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.383848   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.384009   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.386047   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.386379   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.386413   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.386536   46924 provision.go:143] copyHostCerts
	I1009 19:44:48.386564   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:44:48.386605   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 19:44:48.386613   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 19:44:48.386676   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 19:44:48.386771   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:44:48.386789   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 19:44:48.386795   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 19:44:48.386820   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 19:44:48.386874   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:44:48.386892   46924 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 19:44:48.386903   46924 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 19:44:48.386927   46924 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 19:44:48.386972   46924 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.multinode-707643 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-707643]
	I1009 19:44:48.527341   46924 provision.go:177] copyRemoteCerts
	I1009 19:44:48.527401   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 19:44:48.527427   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.530247   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.530577   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.530601   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.530816   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.530981   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.531137   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.531260   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:44:48.614173   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 19:44:48.614251   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 19:44:48.639653   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 19:44:48.639725   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 19:44:48.664955   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 19:44:48.665015   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1009 19:44:48.689412   46924 provision.go:87] duration metric: took 308.809441ms to configureAuth
	I1009 19:44:48.689438   46924 buildroot.go:189] setting minikube options for container-runtime
	I1009 19:44:48.689712   46924 config.go:182] Loaded profile config "multinode-707643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:44:48.689799   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:44:48.692823   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.693139   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:44:48.693165   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:44:48.693342   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:44:48.693504   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.693638   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:44:48.693769   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:44:48.693966   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:44:48.694119   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:44:48.694136   46924 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 19:46:19.527450   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 19:46:19.527478   46924 machine.go:96] duration metric: took 1m31.493172909s to provisionDockerMachine
	I1009 19:46:19.527492   46924 start.go:293] postStartSetup for "multinode-707643" (driver="kvm2")
	I1009 19:46:19.527507   46924 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 19:46:19.527542   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.527821   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 19:46:19.527851   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.530839   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.531199   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.531224   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.531335   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.531474   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.531580   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.531698   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.614558   46924 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 19:46:19.618701   46924 command_runner.go:130] > NAME=Buildroot
	I1009 19:46:19.618719   46924 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1009 19:46:19.618725   46924 command_runner.go:130] > ID=buildroot
	I1009 19:46:19.618732   46924 command_runner.go:130] > VERSION_ID=2023.02.9
	I1009 19:46:19.618741   46924 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1009 19:46:19.618785   46924 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 19:46:19.618806   46924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 19:46:19.618866   46924 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 19:46:19.618932   46924 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 19:46:19.618943   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
	I1009 19:46:19.619051   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 19:46:19.628332   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:46:19.651875   46924 start.go:296] duration metric: took 124.371408ms for postStartSetup
	I1009 19:46:19.651909   46924 fix.go:56] duration metric: took 1m31.637715054s for fixHost
	I1009 19:46:19.651931   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.654439   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.654795   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.654818   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.654996   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.655173   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.655274   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.655353   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.655490   46924 main.go:141] libmachine: Using SSH client type: native
	I1009 19:46:19.655657   46924 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I1009 19:46:19.655668   46924 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 19:46:19.759660   46924 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728503179.735466149
	
	I1009 19:46:19.759680   46924 fix.go:216] guest clock: 1728503179.735466149
	I1009 19:46:19.759687   46924 fix.go:229] Guest: 2024-10-09 19:46:19.735466149 +0000 UTC Remote: 2024-10-09 19:46:19.651914828 +0000 UTC m=+91.759678640 (delta=83.551321ms)
	I1009 19:46:19.759704   46924 fix.go:200] guest clock delta is within tolerance: 83.551321ms
	I1009 19:46:19.759708   46924 start.go:83] releasing machines lock for "multinode-707643", held for 1m31.745527134s
	I1009 19:46:19.759725   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.759930   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:46:19.762045   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.762385   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.762411   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.762518   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.762962   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.763124   46924 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:46:19.763272   46924 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 19:46:19.763316   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.763375   46924 ssh_runner.go:195] Run: cat /version.json
	I1009 19:46:19.763401   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:46:19.765873   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766043   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766253   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.766276   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766424   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.766490   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:19.766519   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:19.766573   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.766687   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:46:19.766709   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.766835   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:46:19.766870   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.766999   46924 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:46:19.767148   46924 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:46:19.852744   46924 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1009 19:46:19.852860   46924 ssh_runner.go:195] Run: systemctl --version
	I1009 19:46:19.876154   46924 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 19:46:19.876816   46924 command_runner.go:130] > systemd 252 (252)
	I1009 19:46:19.876848   46924 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1009 19:46:19.876903   46924 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 19:46:20.050072   46924 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 19:46:20.061630   46924 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1009 19:46:20.061853   46924 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 19:46:20.061930   46924 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 19:46:20.071814   46924 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 19:46:20.071832   46924 start.go:495] detecting cgroup driver to use...
	I1009 19:46:20.071881   46924 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 19:46:20.089797   46924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 19:46:20.104434   46924 docker.go:217] disabling cri-docker service (if available) ...
	I1009 19:46:20.104517   46924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 19:46:20.119050   46924 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 19:46:20.133133   46924 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 19:46:20.285437   46924 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 19:46:20.427386   46924 docker.go:233] disabling docker service ...
	I1009 19:46:20.427475   46924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 19:46:20.443087   46924 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 19:46:20.456166   46924 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 19:46:20.589679   46924 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 19:46:20.730762   46924 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 19:46:20.744643   46924 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 19:46:20.763752   46924 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 19:46:20.763791   46924 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 19:46:20.763840   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.774225   46924 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 19:46:20.774272   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.784385   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.794215   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.804014   46924 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 19:46:20.814229   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.824143   46924 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.834740   46924 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 19:46:20.844320   46924 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 19:46:20.852963   46924 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 19:46:20.853005   46924 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 19:46:20.861573   46924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:46:21.000733   46924 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 19:46:21.196252   46924 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 19:46:21.196342   46924 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 19:46:21.201259   46924 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 19:46:21.201283   46924 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 19:46:21.201293   46924 command_runner.go:130] > Device: 0,22	Inode: 1291        Links: 1
	I1009 19:46:21.201302   46924 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:46:21.201316   46924 command_runner.go:130] > Access: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201328   46924 command_runner.go:130] > Modify: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201339   46924 command_runner.go:130] > Change: 2024-10-09 19:46:21.069430899 +0000
	I1009 19:46:21.201348   46924 command_runner.go:130] >  Birth: -
	I1009 19:46:21.201368   46924 start.go:563] Will wait 60s for crictl version
	I1009 19:46:21.201414   46924 ssh_runner.go:195] Run: which crictl
	I1009 19:46:21.204895   46924 command_runner.go:130] > /usr/bin/crictl
	I1009 19:46:21.205033   46924 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 19:46:21.246638   46924 command_runner.go:130] > Version:  0.1.0
	I1009 19:46:21.246662   46924 command_runner.go:130] > RuntimeName:  cri-o
	I1009 19:46:21.246669   46924 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1009 19:46:21.246676   46924 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 19:46:21.246692   46924 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 19:46:21.246750   46924 ssh_runner.go:195] Run: crio --version
	I1009 19:46:21.274766   46924 command_runner.go:130] > crio version 1.29.1
	I1009 19:46:21.274793   46924 command_runner.go:130] > Version:        1.29.1
	I1009 19:46:21.274799   46924 command_runner.go:130] > GitCommit:      unknown
	I1009 19:46:21.274803   46924 command_runner.go:130] > GitCommitDate:  unknown
	I1009 19:46:21.274807   46924 command_runner.go:130] > GitTreeState:   clean
	I1009 19:46:21.274812   46924 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1009 19:46:21.274817   46924 command_runner.go:130] > GoVersion:      go1.21.6
	I1009 19:46:21.274821   46924 command_runner.go:130] > Compiler:       gc
	I1009 19:46:21.274825   46924 command_runner.go:130] > Platform:       linux/amd64
	I1009 19:46:21.274829   46924 command_runner.go:130] > Linkmode:       dynamic
	I1009 19:46:21.274850   46924 command_runner.go:130] > BuildTags:      
	I1009 19:46:21.274854   46924 command_runner.go:130] >   containers_image_ostree_stub
	I1009 19:46:21.274858   46924 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1009 19:46:21.274863   46924 command_runner.go:130] >   btrfs_noversion
	I1009 19:46:21.274866   46924 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1009 19:46:21.274874   46924 command_runner.go:130] >   libdm_no_deferred_remove
	I1009 19:46:21.274877   46924 command_runner.go:130] >   seccomp
	I1009 19:46:21.274881   46924 command_runner.go:130] > LDFlags:          unknown
	I1009 19:46:21.274889   46924 command_runner.go:130] > SeccompEnabled:   true
	I1009 19:46:21.274898   46924 command_runner.go:130] > AppArmorEnabled:  false
	I1009 19:46:21.276026   46924 ssh_runner.go:195] Run: crio --version
	I1009 19:46:21.303825   46924 command_runner.go:130] > crio version 1.29.1
	I1009 19:46:21.303854   46924 command_runner.go:130] > Version:        1.29.1
	I1009 19:46:21.303863   46924 command_runner.go:130] > GitCommit:      unknown
	I1009 19:46:21.303869   46924 command_runner.go:130] > GitCommitDate:  unknown
	I1009 19:46:21.303876   46924 command_runner.go:130] > GitTreeState:   clean
	I1009 19:46:21.303888   46924 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1009 19:46:21.303896   46924 command_runner.go:130] > GoVersion:      go1.21.6
	I1009 19:46:21.303901   46924 command_runner.go:130] > Compiler:       gc
	I1009 19:46:21.303906   46924 command_runner.go:130] > Platform:       linux/amd64
	I1009 19:46:21.303910   46924 command_runner.go:130] > Linkmode:       dynamic
	I1009 19:46:21.303915   46924 command_runner.go:130] > BuildTags:      
	I1009 19:46:21.303923   46924 command_runner.go:130] >   containers_image_ostree_stub
	I1009 19:46:21.303927   46924 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1009 19:46:21.303936   46924 command_runner.go:130] >   btrfs_noversion
	I1009 19:46:21.303944   46924 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1009 19:46:21.303947   46924 command_runner.go:130] >   libdm_no_deferred_remove
	I1009 19:46:21.303951   46924 command_runner.go:130] >   seccomp
	I1009 19:46:21.303955   46924 command_runner.go:130] > LDFlags:          unknown
	I1009 19:46:21.303959   46924 command_runner.go:130] > SeccompEnabled:   true
	I1009 19:46:21.303963   46924 command_runner.go:130] > AppArmorEnabled:  false
	I1009 19:46:21.306103   46924 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 19:46:21.307363   46924 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:46:21.310183   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:21.310542   46924 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:46:21.310568   46924 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:46:21.310773   46924 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 19:46:21.314803   46924 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1009 19:46:21.315015   46924 kubeadm.go:883] updating cluster {Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 19:46:21.315168   46924 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 19:46:21.315215   46924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:46:21.358094   46924 command_runner.go:130] > {
	I1009 19:46:21.358113   46924 command_runner.go:130] >   "images": [
	I1009 19:46:21.358117   46924 command_runner.go:130] >     {
	I1009 19:46:21.358131   46924 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1009 19:46:21.358136   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358141   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1009 19:46:21.358145   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358148   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358156   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1009 19:46:21.358163   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1009 19:46:21.358166   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358171   46924 command_runner.go:130] >       "size": "87190579",
	I1009 19:46:21.358174   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358178   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358182   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358186   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358190   46924 command_runner.go:130] >     },
	I1009 19:46:21.358198   46924 command_runner.go:130] >     {
	I1009 19:46:21.358203   46924 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1009 19:46:21.358214   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358219   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1009 19:46:21.358222   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358226   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358233   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1009 19:46:21.358240   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1009 19:46:21.358244   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358248   46924 command_runner.go:130] >       "size": "94965812",
	I1009 19:46:21.358254   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358262   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358276   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358280   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358283   46924 command_runner.go:130] >     },
	I1009 19:46:21.358286   46924 command_runner.go:130] >     {
	I1009 19:46:21.358291   46924 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1009 19:46:21.358296   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358303   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1009 19:46:21.358315   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358323   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358333   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1009 19:46:21.358346   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1009 19:46:21.358350   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358354   46924 command_runner.go:130] >       "size": "1363676",
	I1009 19:46:21.358358   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358362   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358366   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358370   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358375   46924 command_runner.go:130] >     },
	I1009 19:46:21.358383   46924 command_runner.go:130] >     {
	I1009 19:46:21.358391   46924 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:46:21.358400   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358408   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:46:21.358415   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358420   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358427   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:46:21.358450   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:46:21.358460   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358466   46924 command_runner.go:130] >       "size": "31470524",
	I1009 19:46:21.358475   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358481   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358490   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358496   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358504   46924 command_runner.go:130] >     },
	I1009 19:46:21.358509   46924 command_runner.go:130] >     {
	I1009 19:46:21.358520   46924 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1009 19:46:21.358528   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358536   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1009 19:46:21.358546   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358553   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358567   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1009 19:46:21.358590   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1009 19:46:21.358599   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358612   46924 command_runner.go:130] >       "size": "63273227",
	I1009 19:46:21.358621   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.358627   46924 command_runner.go:130] >       "username": "nonroot",
	I1009 19:46:21.358633   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358637   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358643   46924 command_runner.go:130] >     },
	I1009 19:46:21.358646   46924 command_runner.go:130] >     {
	I1009 19:46:21.358655   46924 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1009 19:46:21.358665   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358673   46924 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1009 19:46:21.358681   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358688   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358701   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1009 19:46:21.358715   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1009 19:46:21.358723   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358729   46924 command_runner.go:130] >       "size": "149009664",
	I1009 19:46:21.358735   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.358741   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.358749   46924 command_runner.go:130] >       },
	I1009 19:46:21.358756   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358763   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358770   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358778   46924 command_runner.go:130] >     },
	I1009 19:46:21.358783   46924 command_runner.go:130] >     {
	I1009 19:46:21.358795   46924 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1009 19:46:21.358804   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358810   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1009 19:46:21.358814   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358818   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.358832   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1009 19:46:21.358846   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1009 19:46:21.358860   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358869   46924 command_runner.go:130] >       "size": "95237600",
	I1009 19:46:21.358878   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.358886   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.358892   46924 command_runner.go:130] >       },
	I1009 19:46:21.358899   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.358903   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.358912   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.358920   46924 command_runner.go:130] >     },
	I1009 19:46:21.358928   46924 command_runner.go:130] >     {
	I1009 19:46:21.358938   46924 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1009 19:46:21.358947   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.358958   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1009 19:46:21.358967   46924 command_runner.go:130] >       ],
	I1009 19:46:21.358974   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359000   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1009 19:46:21.359017   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1009 19:46:21.359022   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359028   46924 command_runner.go:130] >       "size": "89437508",
	I1009 19:46:21.359036   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359042   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.359050   46924 command_runner.go:130] >       },
	I1009 19:46:21.359057   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359075   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359082   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359088   46924 command_runner.go:130] >     },
	I1009 19:46:21.359093   46924 command_runner.go:130] >     {
	I1009 19:46:21.359103   46924 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1009 19:46:21.359108   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359116   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1009 19:46:21.359121   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359128   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359140   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1009 19:46:21.359154   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1009 19:46:21.359160   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359166   46924 command_runner.go:130] >       "size": "92733849",
	I1009 19:46:21.359172   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.359178   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359184   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359190   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359196   46924 command_runner.go:130] >     },
	I1009 19:46:21.359200   46924 command_runner.go:130] >     {
	I1009 19:46:21.359223   46924 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1009 19:46:21.359231   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359236   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1009 19:46:21.359239   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359250   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359266   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1009 19:46:21.359280   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1009 19:46:21.359288   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359296   46924 command_runner.go:130] >       "size": "68420934",
	I1009 19:46:21.359305   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359314   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.359320   46924 command_runner.go:130] >       },
	I1009 19:46:21.359325   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359333   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359340   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.359348   46924 command_runner.go:130] >     },
	I1009 19:46:21.359359   46924 command_runner.go:130] >     {
	I1009 19:46:21.359370   46924 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1009 19:46:21.359379   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.359388   46924 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1009 19:46:21.359396   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359402   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.359411   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1009 19:46:21.359424   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1009 19:46:21.359438   46924 command_runner.go:130] >       ],
	I1009 19:46:21.359447   46924 command_runner.go:130] >       "size": "742080",
	I1009 19:46:21.359456   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.359466   46924 command_runner.go:130] >         "value": "65535"
	I1009 19:46:21.359473   46924 command_runner.go:130] >       },
	I1009 19:46:21.359479   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.359485   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.359492   46924 command_runner.go:130] >       "pinned": true
	I1009 19:46:21.359495   46924 command_runner.go:130] >     }
	I1009 19:46:21.359502   46924 command_runner.go:130] >   ]
	I1009 19:46:21.359507   46924 command_runner.go:130] > }
	I1009 19:46:21.359744   46924 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:46:21.359757   46924 crio.go:433] Images already preloaded, skipping extraction
	I1009 19:46:21.359812   46924 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 19:46:21.393381   46924 command_runner.go:130] > {
	I1009 19:46:21.393409   46924 command_runner.go:130] >   "images": [
	I1009 19:46:21.393414   46924 command_runner.go:130] >     {
	I1009 19:46:21.393423   46924 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1009 19:46:21.393430   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393439   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1009 19:46:21.393445   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393451   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393467   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1009 19:46:21.393478   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1009 19:46:21.393482   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393486   46924 command_runner.go:130] >       "size": "87190579",
	I1009 19:46:21.393492   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393496   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393503   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393511   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393517   46924 command_runner.go:130] >     },
	I1009 19:46:21.393525   46924 command_runner.go:130] >     {
	I1009 19:46:21.393536   46924 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1009 19:46:21.393546   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393683   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1009 19:46:21.393696   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393704   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393715   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1009 19:46:21.393726   46924 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1009 19:46:21.393735   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393741   46924 command_runner.go:130] >       "size": "94965812",
	I1009 19:46:21.393750   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393769   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393779   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393788   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393796   46924 command_runner.go:130] >     },
	I1009 19:46:21.393802   46924 command_runner.go:130] >     {
	I1009 19:46:21.393814   46924 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1009 19:46:21.393829   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393840   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1009 19:46:21.393847   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393852   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.393867   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1009 19:46:21.393882   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1009 19:46:21.393890   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393900   46924 command_runner.go:130] >       "size": "1363676",
	I1009 19:46:21.393909   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.393918   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.393926   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.393932   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.393936   46924 command_runner.go:130] >     },
	I1009 19:46:21.393945   46924 command_runner.go:130] >     {
	I1009 19:46:21.393958   46924 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1009 19:46:21.393968   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.393979   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 19:46:21.393990   46924 command_runner.go:130] >       ],
	I1009 19:46:21.393999   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394012   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1009 19:46:21.394032   46924 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1009 19:46:21.394041   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394048   46924 command_runner.go:130] >       "size": "31470524",
	I1009 19:46:21.394057   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394063   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394072   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394079   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394086   46924 command_runner.go:130] >     },
	I1009 19:46:21.394092   46924 command_runner.go:130] >     {
	I1009 19:46:21.394101   46924 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1009 19:46:21.394105   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394112   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1009 19:46:21.394119   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394133   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394147   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1009 19:46:21.394161   46924 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1009 19:46:21.394170   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394177   46924 command_runner.go:130] >       "size": "63273227",
	I1009 19:46:21.394184   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394188   46924 command_runner.go:130] >       "username": "nonroot",
	I1009 19:46:21.394195   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394201   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394214   46924 command_runner.go:130] >     },
	I1009 19:46:21.394222   46924 command_runner.go:130] >     {
	I1009 19:46:21.394234   46924 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1009 19:46:21.394243   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394251   46924 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1009 19:46:21.394259   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394266   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394275   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1009 19:46:21.394287   46924 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1009 19:46:21.394296   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394306   46924 command_runner.go:130] >       "size": "149009664",
	I1009 19:46:21.394314   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394324   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394334   46924 command_runner.go:130] >       },
	I1009 19:46:21.394343   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394350   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394357   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394361   46924 command_runner.go:130] >     },
	I1009 19:46:21.394367   46924 command_runner.go:130] >     {
	I1009 19:46:21.394377   46924 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1009 19:46:21.394385   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394397   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1009 19:46:21.394405   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394414   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394436   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1009 19:46:21.394446   46924 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1009 19:46:21.394450   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394458   46924 command_runner.go:130] >       "size": "95237600",
	I1009 19:46:21.394468   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394477   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394485   46924 command_runner.go:130] >       },
	I1009 19:46:21.394492   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394501   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394509   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394517   46924 command_runner.go:130] >     },
	I1009 19:46:21.394523   46924 command_runner.go:130] >     {
	I1009 19:46:21.394532   46924 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1009 19:46:21.394536   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394547   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1009 19:46:21.394555   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394564   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394594   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1009 19:46:21.394609   46924 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1009 19:46:21.394613   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394619   46924 command_runner.go:130] >       "size": "89437508",
	I1009 19:46:21.394623   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394631   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394640   46924 command_runner.go:130] >       },
	I1009 19:46:21.394650   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394659   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394668   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394676   46924 command_runner.go:130] >     },
	I1009 19:46:21.394682   46924 command_runner.go:130] >     {
	I1009 19:46:21.394694   46924 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1009 19:46:21.394701   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394706   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1009 19:46:21.394713   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394725   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394740   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1009 19:46:21.394754   46924 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1009 19:46:21.394764   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394774   46924 command_runner.go:130] >       "size": "92733849",
	I1009 19:46:21.394780   46924 command_runner.go:130] >       "uid": null,
	I1009 19:46:21.394787   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394791   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394795   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394803   46924 command_runner.go:130] >     },
	I1009 19:46:21.394812   46924 command_runner.go:130] >     {
	I1009 19:46:21.394825   46924 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1009 19:46:21.394834   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394848   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1009 19:46:21.394856   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394863   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.394874   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1009 19:46:21.394887   46924 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1009 19:46:21.394896   46924 command_runner.go:130] >       ],
	I1009 19:46:21.394905   46924 command_runner.go:130] >       "size": "68420934",
	I1009 19:46:21.394914   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.394923   46924 command_runner.go:130] >         "value": "0"
	I1009 19:46:21.394932   46924 command_runner.go:130] >       },
	I1009 19:46:21.394940   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.394947   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.394955   46924 command_runner.go:130] >       "pinned": false
	I1009 19:46:21.394959   46924 command_runner.go:130] >     },
	I1009 19:46:21.394962   46924 command_runner.go:130] >     {
	I1009 19:46:21.394973   46924 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1009 19:46:21.394981   46924 command_runner.go:130] >       "repoTags": [
	I1009 19:46:21.394993   46924 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1009 19:46:21.395001   46924 command_runner.go:130] >       ],
	I1009 19:46:21.395010   46924 command_runner.go:130] >       "repoDigests": [
	I1009 19:46:21.395031   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1009 19:46:21.395043   46924 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1009 19:46:21.395049   46924 command_runner.go:130] >       ],
	I1009 19:46:21.395055   46924 command_runner.go:130] >       "size": "742080",
	I1009 19:46:21.395077   46924 command_runner.go:130] >       "uid": {
	I1009 19:46:21.395084   46924 command_runner.go:130] >         "value": "65535"
	I1009 19:46:21.395091   46924 command_runner.go:130] >       },
	I1009 19:46:21.395097   46924 command_runner.go:130] >       "username": "",
	I1009 19:46:21.395105   46924 command_runner.go:130] >       "spec": null,
	I1009 19:46:21.395112   46924 command_runner.go:130] >       "pinned": true
	I1009 19:46:21.395119   46924 command_runner.go:130] >     }
	I1009 19:46:21.395124   46924 command_runner.go:130] >   ]
	I1009 19:46:21.395137   46924 command_runner.go:130] > }
	I1009 19:46:21.395311   46924 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 19:46:21.395324   46924 cache_images.go:84] Images are preloaded, skipping loading
	I1009 19:46:21.395333   46924 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.31.1 crio true true} ...
	I1009 19:46:21.395441   46924 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-707643 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 19:46:21.395527   46924 ssh_runner.go:195] Run: crio config
	I1009 19:46:21.427794   46924 command_runner.go:130] ! time="2024-10-09 19:46:21.403713141Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1009 19:46:21.433025   46924 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 19:46:21.439362   46924 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 19:46:21.439383   46924 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 19:46:21.439389   46924 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 19:46:21.439393   46924 command_runner.go:130] > #
	I1009 19:46:21.439402   46924 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 19:46:21.439410   46924 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 19:46:21.439420   46924 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 19:46:21.439433   46924 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 19:46:21.439443   46924 command_runner.go:130] > # reload'.
	I1009 19:46:21.439454   46924 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 19:46:21.439465   46924 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 19:46:21.439478   46924 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 19:46:21.439487   46924 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 19:46:21.439494   46924 command_runner.go:130] > [crio]
	I1009 19:46:21.439504   46924 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 19:46:21.439515   46924 command_runner.go:130] > # containers images, in this directory.
	I1009 19:46:21.439523   46924 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1009 19:46:21.439540   46924 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 19:46:21.439550   46924 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1009 19:46:21.439563   46924 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1009 19:46:21.439573   46924 command_runner.go:130] > # imagestore = ""
	I1009 19:46:21.439583   46924 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 19:46:21.439595   46924 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 19:46:21.439606   46924 command_runner.go:130] > storage_driver = "overlay"
	I1009 19:46:21.439614   46924 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 19:46:21.439626   46924 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 19:46:21.439639   46924 command_runner.go:130] > storage_option = [
	I1009 19:46:21.439659   46924 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1009 19:46:21.439670   46924 command_runner.go:130] > ]
	I1009 19:46:21.439679   46924 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 19:46:21.439690   46924 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 19:46:21.439702   46924 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 19:46:21.439714   46924 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 19:46:21.439727   46924 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 19:46:21.439736   46924 command_runner.go:130] > # always happen on a node reboot
	I1009 19:46:21.439747   46924 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 19:46:21.439761   46924 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 19:46:21.439769   46924 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 19:46:21.439779   46924 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 19:46:21.439790   46924 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1009 19:46:21.439802   46924 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 19:46:21.439817   46924 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 19:46:21.439826   46924 command_runner.go:130] > # internal_wipe = true
	I1009 19:46:21.439841   46924 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1009 19:46:21.439852   46924 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1009 19:46:21.439860   46924 command_runner.go:130] > # internal_repair = false
	I1009 19:46:21.439865   46924 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 19:46:21.439878   46924 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 19:46:21.439889   46924 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 19:46:21.439900   46924 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 19:46:21.439912   46924 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 19:46:21.439921   46924 command_runner.go:130] > [crio.api]
	I1009 19:46:21.439932   46924 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 19:46:21.439942   46924 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 19:46:21.439953   46924 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 19:46:21.439960   46924 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 19:46:21.439969   46924 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 19:46:21.439980   46924 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 19:46:21.439989   46924 command_runner.go:130] > # stream_port = "0"
	I1009 19:46:21.440000   46924 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 19:46:21.440011   46924 command_runner.go:130] > # stream_enable_tls = false
	I1009 19:46:21.440022   46924 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 19:46:21.440031   46924 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 19:46:21.440040   46924 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 19:46:21.440054   46924 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1009 19:46:21.440063   46924 command_runner.go:130] > # minutes.
	I1009 19:46:21.440069   46924 command_runner.go:130] > # stream_tls_cert = ""
	I1009 19:46:21.440082   46924 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 19:46:21.440095   46924 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1009 19:46:21.440104   46924 command_runner.go:130] > # stream_tls_key = ""
	I1009 19:46:21.440113   46924 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 19:46:21.440125   46924 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 19:46:21.440142   46924 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1009 19:46:21.440151   46924 command_runner.go:130] > # stream_tls_ca = ""
	I1009 19:46:21.440165   46924 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:46:21.440174   46924 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1009 19:46:21.440188   46924 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1009 19:46:21.440198   46924 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1009 19:46:21.440210   46924 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 19:46:21.440218   46924 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 19:46:21.440226   46924 command_runner.go:130] > [crio.runtime]
	I1009 19:46:21.440234   46924 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 19:46:21.440245   46924 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 19:46:21.440252   46924 command_runner.go:130] > # "nofile=1024:2048"
	I1009 19:46:21.440267   46924 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 19:46:21.440277   46924 command_runner.go:130] > # default_ulimits = [
	I1009 19:46:21.440283   46924 command_runner.go:130] > # ]
	I1009 19:46:21.440294   46924 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 19:46:21.440303   46924 command_runner.go:130] > # no_pivot = false
	I1009 19:46:21.440312   46924 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 19:46:21.440322   46924 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 19:46:21.440332   46924 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 19:46:21.440343   46924 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 19:46:21.440356   46924 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 19:46:21.440368   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:46:21.440379   46924 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1009 19:46:21.440388   46924 command_runner.go:130] > # Cgroup setting for conmon
	I1009 19:46:21.440401   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 19:46:21.440409   46924 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 19:46:21.440415   46924 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 19:46:21.440425   46924 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 19:46:21.440441   46924 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 19:46:21.440450   46924 command_runner.go:130] > conmon_env = [
	I1009 19:46:21.440462   46924 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1009 19:46:21.440469   46924 command_runner.go:130] > ]
	I1009 19:46:21.440478   46924 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 19:46:21.440489   46924 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 19:46:21.440499   46924 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 19:46:21.440506   46924 command_runner.go:130] > # default_env = [
	I1009 19:46:21.440512   46924 command_runner.go:130] > # ]
	I1009 19:46:21.440524   46924 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 19:46:21.440538   46924 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1009 19:46:21.440548   46924 command_runner.go:130] > # selinux = false
	I1009 19:46:21.440560   46924 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 19:46:21.440572   46924 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1009 19:46:21.440584   46924 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1009 19:46:21.440593   46924 command_runner.go:130] > # seccomp_profile = ""
	I1009 19:46:21.440602   46924 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1009 19:46:21.440611   46924 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1009 19:46:21.440622   46924 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1009 19:46:21.440632   46924 command_runner.go:130] > # which might increase security.
	I1009 19:46:21.440639   46924 command_runner.go:130] > # This option is currently deprecated,
	I1009 19:46:21.440652   46924 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1009 19:46:21.440662   46924 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1009 19:46:21.440675   46924 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 19:46:21.440687   46924 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 19:46:21.440699   46924 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 19:46:21.440709   46924 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 19:46:21.440719   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.440729   46924 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 19:46:21.440737   46924 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 19:46:21.440748   46924 command_runner.go:130] > # the cgroup blockio controller.
	I1009 19:46:21.440757   46924 command_runner.go:130] > # blockio_config_file = ""
	I1009 19:46:21.440770   46924 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1009 19:46:21.440779   46924 command_runner.go:130] > # blockio parameters.
	I1009 19:46:21.440788   46924 command_runner.go:130] > # blockio_reload = false
	I1009 19:46:21.440798   46924 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 19:46:21.440804   46924 command_runner.go:130] > # irqbalance daemon.
	I1009 19:46:21.440812   46924 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 19:46:21.440827   46924 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1009 19:46:21.440841   46924 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1009 19:46:21.440853   46924 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1009 19:46:21.440866   46924 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1009 19:46:21.440878   46924 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 19:46:21.440886   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.440892   46924 command_runner.go:130] > # rdt_config_file = ""
	I1009 19:46:21.440900   46924 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 19:46:21.440910   46924 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1009 19:46:21.440937   46924 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 19:46:21.440946   46924 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 19:46:21.440959   46924 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 19:46:21.440971   46924 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 19:46:21.440978   46924 command_runner.go:130] > # will be added.
	I1009 19:46:21.440982   46924 command_runner.go:130] > # default_capabilities = [
	I1009 19:46:21.440989   46924 command_runner.go:130] > # 	"CHOWN",
	I1009 19:46:21.440994   46924 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 19:46:21.441003   46924 command_runner.go:130] > # 	"FSETID",
	I1009 19:46:21.441009   46924 command_runner.go:130] > # 	"FOWNER",
	I1009 19:46:21.441018   46924 command_runner.go:130] > # 	"SETGID",
	I1009 19:46:21.441027   46924 command_runner.go:130] > # 	"SETUID",
	I1009 19:46:21.441033   46924 command_runner.go:130] > # 	"SETPCAP",
	I1009 19:46:21.441040   46924 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 19:46:21.441048   46924 command_runner.go:130] > # 	"KILL",
	I1009 19:46:21.441053   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441067   46924 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 19:46:21.441076   46924 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 19:46:21.441082   46924 command_runner.go:130] > # add_inheritable_capabilities = false
	I1009 19:46:21.441094   46924 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 19:46:21.441105   46924 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:46:21.441112   46924 command_runner.go:130] > default_sysctls = [
	I1009 19:46:21.441122   46924 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1009 19:46:21.441127   46924 command_runner.go:130] > ]
	I1009 19:46:21.441137   46924 command_runner.go:130] > # List of devices on the host that a
	I1009 19:46:21.441149   46924 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 19:46:21.441158   46924 command_runner.go:130] > # allowed_devices = [
	I1009 19:46:21.441167   46924 command_runner.go:130] > # 	"/dev/fuse",
	I1009 19:46:21.441174   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441179   46924 command_runner.go:130] > # List of additional devices. specified as
	I1009 19:46:21.441192   46924 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 19:46:21.441204   46924 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 19:46:21.441219   46924 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 19:46:21.441228   46924 command_runner.go:130] > # additional_devices = [
	I1009 19:46:21.441236   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441244   46924 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 19:46:21.441253   46924 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 19:46:21.441265   46924 command_runner.go:130] > # 	"/etc/cdi",
	I1009 19:46:21.441272   46924 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 19:46:21.441276   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441285   46924 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 19:46:21.441298   46924 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 19:46:21.441306   46924 command_runner.go:130] > # Defaults to false.
	I1009 19:46:21.441317   46924 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 19:46:21.441332   46924 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 19:46:21.441344   46924 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 19:46:21.441352   46924 command_runner.go:130] > # hooks_dir = [
	I1009 19:46:21.441360   46924 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 19:46:21.441363   46924 command_runner.go:130] > # ]
	I1009 19:46:21.441375   46924 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 19:46:21.441387   46924 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 19:46:21.441399   46924 command_runner.go:130] > # its default mounts from the following two files:
	I1009 19:46:21.441407   46924 command_runner.go:130] > #
	I1009 19:46:21.441419   46924 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 19:46:21.441432   46924 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 19:46:21.441443   46924 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 19:46:21.441450   46924 command_runner.go:130] > #
	I1009 19:46:21.441456   46924 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 19:46:21.441468   46924 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 19:46:21.441480   46924 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 19:46:21.441491   46924 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 19:46:21.441499   46924 command_runner.go:130] > #
	I1009 19:46:21.441509   46924 command_runner.go:130] > # default_mounts_file = ""
	I1009 19:46:21.441521   46924 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 19:46:21.441534   46924 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 19:46:21.441542   46924 command_runner.go:130] > pids_limit = 1024
	I1009 19:46:21.441550   46924 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1009 19:46:21.441561   46924 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 19:46:21.441574   46924 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 19:46:21.441589   46924 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 19:46:21.441598   46924 command_runner.go:130] > # log_size_max = -1
	I1009 19:46:21.441611   46924 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 19:46:21.441625   46924 command_runner.go:130] > # log_to_journald = false
	I1009 19:46:21.441634   46924 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 19:46:21.441643   46924 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 19:46:21.441654   46924 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 19:46:21.441665   46924 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 19:46:21.441675   46924 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 19:46:21.441684   46924 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 19:46:21.441696   46924 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 19:46:21.441705   46924 command_runner.go:130] > # read_only = false
	I1009 19:46:21.441717   46924 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 19:46:21.441729   46924 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 19:46:21.441735   46924 command_runner.go:130] > # live configuration reload.
	I1009 19:46:21.441739   46924 command_runner.go:130] > # log_level = "info"
	I1009 19:46:21.441751   46924 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 19:46:21.441762   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.441772   46924 command_runner.go:130] > # log_filter = ""
	I1009 19:46:21.441783   46924 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 19:46:21.441797   46924 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 19:46:21.441806   46924 command_runner.go:130] > # separated by comma.
	I1009 19:46:21.441820   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441835   46924 command_runner.go:130] > # uid_mappings = ""
	I1009 19:46:21.441849   46924 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 19:46:21.441862   46924 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 19:46:21.441871   46924 command_runner.go:130] > # separated by comma.
	I1009 19:46:21.441885   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441894   46924 command_runner.go:130] > # gid_mappings = ""
	I1009 19:46:21.441906   46924 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 19:46:21.441919   46924 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:46:21.441928   46924 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:46:21.441939   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.441957   46924 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 19:46:21.441971   46924 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 19:46:21.441984   46924 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 19:46:21.441996   46924 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 19:46:21.442010   46924 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1009 19:46:21.442017   46924 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 19:46:21.442029   46924 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 19:46:21.442041   46924 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 19:46:21.442056   46924 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 19:46:21.442065   46924 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 19:46:21.442077   46924 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 19:46:21.442089   46924 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 19:46:21.442100   46924 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 19:46:21.442107   46924 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 19:46:21.442111   46924 command_runner.go:130] > drop_infra_ctr = false
	I1009 19:46:21.442119   46924 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 19:46:21.442131   46924 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 19:46:21.442143   46924 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 19:46:21.442152   46924 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 19:46:21.442163   46924 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1009 19:46:21.442179   46924 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1009 19:46:21.442189   46924 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1009 19:46:21.442197   46924 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1009 19:46:21.442202   46924 command_runner.go:130] > # shared_cpuset = ""
	I1009 19:46:21.442214   46924 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 19:46:21.442225   46924 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 19:46:21.442232   46924 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 19:46:21.442245   46924 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 19:46:21.442260   46924 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1009 19:46:21.442272   46924 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1009 19:46:21.442284   46924 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1009 19:46:21.442292   46924 command_runner.go:130] > # enable_criu_support = false
	I1009 19:46:21.442299   46924 command_runner.go:130] > # Enable/disable the generation of the container,
	I1009 19:46:21.442310   46924 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1009 19:46:21.442321   46924 command_runner.go:130] > # enable_pod_events = false
	I1009 19:46:21.442331   46924 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:46:21.442343   46924 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 19:46:21.442353   46924 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1009 19:46:21.442363   46924 command_runner.go:130] > # default_runtime = "runc"
	I1009 19:46:21.442373   46924 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 19:46:21.442387   46924 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 19:46:21.442401   46924 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 19:46:21.442415   46924 command_runner.go:130] > # creation as a file is not desired either.
	I1009 19:46:21.442430   46924 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 19:46:21.442442   46924 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 19:46:21.442452   46924 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 19:46:21.442459   46924 command_runner.go:130] > # ]
	I1009 19:46:21.442469   46924 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 19:46:21.442481   46924 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 19:46:21.442491   46924 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1009 19:46:21.442499   46924 command_runner.go:130] > # Each entry in the table should follow the format:
	I1009 19:46:21.442504   46924 command_runner.go:130] > #
	I1009 19:46:21.442514   46924 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1009 19:46:21.442524   46924 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1009 19:46:21.442577   46924 command_runner.go:130] > # runtime_type = "oci"
	I1009 19:46:21.442588   46924 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1009 19:46:21.442596   46924 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1009 19:46:21.442601   46924 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1009 19:46:21.442611   46924 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1009 19:46:21.442620   46924 command_runner.go:130] > # monitor_env = []
	I1009 19:46:21.442628   46924 command_runner.go:130] > # privileged_without_host_devices = false
	I1009 19:46:21.442638   46924 command_runner.go:130] > # allowed_annotations = []
	I1009 19:46:21.442649   46924 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1009 19:46:21.442657   46924 command_runner.go:130] > # Where:
	I1009 19:46:21.442669   46924 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1009 19:46:21.442681   46924 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1009 19:46:21.442691   46924 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 19:46:21.442700   46924 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 19:46:21.442709   46924 command_runner.go:130] > #   in $PATH.
	I1009 19:46:21.442722   46924 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1009 19:46:21.442733   46924 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 19:46:21.442746   46924 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1009 19:46:21.442754   46924 command_runner.go:130] > #   state.
	I1009 19:46:21.442765   46924 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 19:46:21.442778   46924 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1009 19:46:21.442788   46924 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 19:46:21.442796   46924 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 19:46:21.442809   46924 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 19:46:21.442822   46924 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 19:46:21.442837   46924 command_runner.go:130] > #   The currently recognized values are:
	I1009 19:46:21.442850   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 19:46:21.442864   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 19:46:21.442876   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 19:46:21.442886   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 19:46:21.442896   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 19:46:21.442909   46924 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 19:46:21.442922   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1009 19:46:21.442935   46924 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1009 19:46:21.442947   46924 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 19:46:21.442959   46924 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1009 19:46:21.442969   46924 command_runner.go:130] > #   deprecated option "conmon".
	I1009 19:46:21.442983   46924 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1009 19:46:21.442990   46924 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1009 19:46:21.442999   46924 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1009 19:46:21.443009   46924 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 19:46:21.443023   46924 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1009 19:46:21.443034   46924 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1009 19:46:21.443046   46924 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1009 19:46:21.443057   46924 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1009 19:46:21.443073   46924 command_runner.go:130] > #
	I1009 19:46:21.443083   46924 command_runner.go:130] > # Using the seccomp notifier feature:
	I1009 19:46:21.443092   46924 command_runner.go:130] > #
	I1009 19:46:21.443201   46924 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1009 19:46:21.443222   46924 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1009 19:46:21.443233   46924 command_runner.go:130] > #
	I1009 19:46:21.443247   46924 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1009 19:46:21.443261   46924 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1009 19:46:21.443274   46924 command_runner.go:130] > #
	I1009 19:46:21.443289   46924 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1009 19:46:21.443299   46924 command_runner.go:130] > # feature.
	I1009 19:46:21.443307   46924 command_runner.go:130] > #
	I1009 19:46:21.443317   46924 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1009 19:46:21.443331   46924 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1009 19:46:21.443346   46924 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1009 19:46:21.443368   46924 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1009 19:46:21.443421   46924 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1009 19:46:21.443437   46924 command_runner.go:130] > #
	I1009 19:46:21.443451   46924 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1009 19:46:21.443463   46924 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1009 19:46:21.443472   46924 command_runner.go:130] > #
	I1009 19:46:21.443485   46924 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1009 19:46:21.443499   46924 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1009 19:46:21.443507   46924 command_runner.go:130] > #
	I1009 19:46:21.443520   46924 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1009 19:46:21.443532   46924 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1009 19:46:21.443541   46924 command_runner.go:130] > # limitation.
	I1009 19:46:21.443554   46924 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 19:46:21.443564   46924 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1009 19:46:21.443574   46924 command_runner.go:130] > runtime_type = "oci"
	I1009 19:46:21.443584   46924 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 19:46:21.443593   46924 command_runner.go:130] > runtime_config_path = ""
	I1009 19:46:21.443604   46924 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1009 19:46:21.443613   46924 command_runner.go:130] > monitor_cgroup = "pod"
	I1009 19:46:21.443620   46924 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 19:46:21.443624   46924 command_runner.go:130] > monitor_env = [
	I1009 19:46:21.443636   46924 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1009 19:46:21.443645   46924 command_runner.go:130] > ]
	I1009 19:46:21.443653   46924 command_runner.go:130] > privileged_without_host_devices = false
	I1009 19:46:21.443667   46924 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 19:46:21.443678   46924 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 19:46:21.443695   46924 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 19:46:21.443709   46924 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1009 19:46:21.443719   46924 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1009 19:46:21.443731   46924 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 19:46:21.443749   46924 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 19:46:21.443765   46924 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 19:46:21.443777   46924 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 19:46:21.443787   46924 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 19:46:21.443793   46924 command_runner.go:130] > # Example:
	I1009 19:46:21.443800   46924 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 19:46:21.443806   46924 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 19:46:21.443810   46924 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 19:46:21.443825   46924 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 19:46:21.443831   46924 command_runner.go:130] > # cpuset = 0
	I1009 19:46:21.443838   46924 command_runner.go:130] > # cpushares = "0-1"
	I1009 19:46:21.443844   46924 command_runner.go:130] > # Where:
	I1009 19:46:21.443851   46924 command_runner.go:130] > # The workload name is workload-type.
	I1009 19:46:21.443861   46924 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 19:46:21.443870   46924 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 19:46:21.443879   46924 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 19:46:21.443890   46924 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 19:46:21.443895   46924 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1009 19:46:21.443900   46924 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1009 19:46:21.443910   46924 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1009 19:46:21.443916   46924 command_runner.go:130] > # Default value is set to true
	I1009 19:46:21.443924   46924 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1009 19:46:21.443933   46924 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1009 19:46:21.443940   46924 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1009 19:46:21.443947   46924 command_runner.go:130] > # Default value is set to 'false'
	I1009 19:46:21.443955   46924 command_runner.go:130] > # disable_hostport_mapping = false
	I1009 19:46:21.443968   46924 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 19:46:21.443975   46924 command_runner.go:130] > #
	I1009 19:46:21.443981   46924 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 19:46:21.443994   46924 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1009 19:46:21.444008   46924 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1009 19:46:21.444021   46924 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1009 19:46:21.444033   46924 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1009 19:46:21.444042   46924 command_runner.go:130] > [crio.image]
	I1009 19:46:21.444051   46924 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 19:46:21.444060   46924 command_runner.go:130] > # default_transport = "docker://"
	I1009 19:46:21.444071   46924 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 19:46:21.444079   46924 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:46:21.444085   46924 command_runner.go:130] > # global_auth_file = ""
	I1009 19:46:21.444096   46924 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 19:46:21.444104   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.444116   46924 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1009 19:46:21.444127   46924 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 19:46:21.444139   46924 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 19:46:21.444151   46924 command_runner.go:130] > # This option supports live configuration reload.
	I1009 19:46:21.444163   46924 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 19:46:21.444171   46924 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 19:46:21.444180   46924 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 19:46:21.444193   46924 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 19:46:21.444220   46924 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 19:46:21.444230   46924 command_runner.go:130] > # pause_command = "/pause"
	I1009 19:46:21.444242   46924 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1009 19:46:21.444254   46924 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1009 19:46:21.444263   46924 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1009 19:46:21.444278   46924 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1009 19:46:21.444291   46924 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1009 19:46:21.444304   46924 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1009 19:46:21.444314   46924 command_runner.go:130] > # pinned_images = [
	I1009 19:46:21.444319   46924 command_runner.go:130] > # ]
	I1009 19:46:21.444331   46924 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 19:46:21.444346   46924 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 19:46:21.444357   46924 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 19:46:21.444371   46924 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 19:46:21.444383   46924 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 19:46:21.444393   46924 command_runner.go:130] > # signature_policy = ""
	I1009 19:46:21.444406   46924 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1009 19:46:21.444419   46924 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1009 19:46:21.444431   46924 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1009 19:46:21.444444   46924 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1009 19:46:21.444454   46924 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1009 19:46:21.444463   46924 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1009 19:46:21.444476   46924 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 19:46:21.444489   46924 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 19:46:21.444498   46924 command_runner.go:130] > # changing them here.
	I1009 19:46:21.444508   46924 command_runner.go:130] > # insecure_registries = [
	I1009 19:46:21.444516   46924 command_runner.go:130] > # ]
	I1009 19:46:21.444526   46924 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 19:46:21.444534   46924 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 19:46:21.444538   46924 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 19:46:21.444545   46924 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 19:46:21.444555   46924 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 19:46:21.444571   46924 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1009 19:46:21.444580   46924 command_runner.go:130] > # CNI plugins.
	I1009 19:46:21.444589   46924 command_runner.go:130] > [crio.network]
	I1009 19:46:21.444602   46924 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 19:46:21.444616   46924 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1009 19:46:21.444625   46924 command_runner.go:130] > # cni_default_network = ""
	I1009 19:46:21.444635   46924 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 19:46:21.444641   46924 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 19:46:21.444650   46924 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 19:46:21.444659   46924 command_runner.go:130] > # plugin_dirs = [
	I1009 19:46:21.444669   46924 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 19:46:21.444677   46924 command_runner.go:130] > # ]
	I1009 19:46:21.444689   46924 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 19:46:21.444698   46924 command_runner.go:130] > [crio.metrics]
	I1009 19:46:21.444714   46924 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 19:46:21.444722   46924 command_runner.go:130] > enable_metrics = true
	I1009 19:46:21.444726   46924 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 19:46:21.444735   46924 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 19:46:21.444747   46924 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1009 19:46:21.444760   46924 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 19:46:21.444772   46924 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 19:46:21.444781   46924 command_runner.go:130] > # metrics_collectors = [
	I1009 19:46:21.444790   46924 command_runner.go:130] > # 	"operations",
	I1009 19:46:21.444800   46924 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1009 19:46:21.444809   46924 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1009 19:46:21.444816   46924 command_runner.go:130] > # 	"operations_errors",
	I1009 19:46:21.444823   46924 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1009 19:46:21.444832   46924 command_runner.go:130] > # 	"image_pulls_by_name",
	I1009 19:46:21.444842   46924 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1009 19:46:21.444852   46924 command_runner.go:130] > # 	"image_pulls_failures",
	I1009 19:46:21.444862   46924 command_runner.go:130] > # 	"image_pulls_successes",
	I1009 19:46:21.444870   46924 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 19:46:21.444880   46924 command_runner.go:130] > # 	"image_layer_reuse",
	I1009 19:46:21.444890   46924 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1009 19:46:21.444897   46924 command_runner.go:130] > # 	"containers_oom_total",
	I1009 19:46:21.444901   46924 command_runner.go:130] > # 	"containers_oom",
	I1009 19:46:21.444905   46924 command_runner.go:130] > # 	"processes_defunct",
	I1009 19:46:21.444914   46924 command_runner.go:130] > # 	"operations_total",
	I1009 19:46:21.444924   46924 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 19:46:21.444931   46924 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 19:46:21.444941   46924 command_runner.go:130] > # 	"operations_errors_total",
	I1009 19:46:21.444951   46924 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 19:46:21.444962   46924 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 19:46:21.444971   46924 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 19:46:21.444982   46924 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 19:46:21.444992   46924 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 19:46:21.444998   46924 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 19:46:21.445007   46924 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1009 19:46:21.445016   46924 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1009 19:46:21.445022   46924 command_runner.go:130] > # ]
	I1009 19:46:21.445033   46924 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 19:46:21.445042   46924 command_runner.go:130] > # metrics_port = 9090
	I1009 19:46:21.445052   46924 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 19:46:21.445061   46924 command_runner.go:130] > # metrics_socket = ""
	I1009 19:46:21.445072   46924 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 19:46:21.445084   46924 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 19:46:21.445093   46924 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 19:46:21.445103   46924 command_runner.go:130] > # certificate on any modification event.
	I1009 19:46:21.445113   46924 command_runner.go:130] > # metrics_cert = ""
	I1009 19:46:21.445122   46924 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 19:46:21.445133   46924 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 19:46:21.445143   46924 command_runner.go:130] > # metrics_key = ""
	I1009 19:46:21.445154   46924 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 19:46:21.445163   46924 command_runner.go:130] > [crio.tracing]
	I1009 19:46:21.445174   46924 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 19:46:21.445182   46924 command_runner.go:130] > # enable_tracing = false
	I1009 19:46:21.445187   46924 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1009 19:46:21.445197   46924 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1009 19:46:21.445216   46924 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1009 19:46:21.445226   46924 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 19:46:21.445236   46924 command_runner.go:130] > # CRI-O NRI configuration.
	I1009 19:46:21.445245   46924 command_runner.go:130] > [crio.nri]
	I1009 19:46:21.445254   46924 command_runner.go:130] > # Globally enable or disable NRI.
	I1009 19:46:21.445264   46924 command_runner.go:130] > # enable_nri = false
	I1009 19:46:21.445273   46924 command_runner.go:130] > # NRI socket to listen on.
	I1009 19:46:21.445281   46924 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1009 19:46:21.445285   46924 command_runner.go:130] > # NRI plugin directory to use.
	I1009 19:46:21.445294   46924 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1009 19:46:21.445306   46924 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1009 19:46:21.445315   46924 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1009 19:46:21.445331   46924 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1009 19:46:21.445340   46924 command_runner.go:130] > # nri_disable_connections = false
	I1009 19:46:21.445351   46924 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1009 19:46:21.445361   46924 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1009 19:46:21.445371   46924 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1009 19:46:21.445378   46924 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1009 19:46:21.445386   46924 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 19:46:21.445395   46924 command_runner.go:130] > [crio.stats]
	I1009 19:46:21.445410   46924 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 19:46:21.445421   46924 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 19:46:21.445430   46924 command_runner.go:130] > # stats_collection_period = 0
	I1009 19:46:21.445530   46924 cni.go:84] Creating CNI manager for ""
	I1009 19:46:21.445544   46924 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1009 19:46:21.445559   46924 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 19:46:21.445590   46924 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-707643 NodeName:multinode-707643 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 19:46:21.445731   46924 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-707643"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 19:46:21.445799   46924 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 19:46:21.455571   46924 command_runner.go:130] > kubeadm
	I1009 19:46:21.455588   46924 command_runner.go:130] > kubectl
	I1009 19:46:21.455593   46924 command_runner.go:130] > kubelet
	I1009 19:46:21.455608   46924 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 19:46:21.455652   46924 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 19:46:21.464750   46924 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1009 19:46:21.480502   46924 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 19:46:21.495895   46924 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1009 19:46:21.511503   46924 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I1009 19:46:21.515297   46924 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I1009 19:46:21.515347   46924 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 19:46:21.650553   46924 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 19:46:21.666406   46924 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643 for IP: 192.168.39.10
	I1009 19:46:21.666431   46924 certs.go:194] generating shared ca certs ...
	I1009 19:46:21.666449   46924 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 19:46:21.666621   46924 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 19:46:21.666671   46924 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 19:46:21.666684   46924 certs.go:256] generating profile certs ...
	I1009 19:46:21.666794   46924 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/client.key
	I1009 19:46:21.666865   46924 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key.ba20182f
	I1009 19:46:21.666909   46924 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key
	I1009 19:46:21.666923   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 19:46:21.666941   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 19:46:21.666958   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 19:46:21.666975   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 19:46:21.666991   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 19:46:21.667007   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 19:46:21.667026   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 19:46:21.667044   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 19:46:21.667198   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 19:46:21.667244   46924 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 19:46:21.667259   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 19:46:21.667294   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 19:46:21.667324   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 19:46:21.667357   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 19:46:21.667408   46924 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 19:46:21.667445   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.667465   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem -> /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.667483   46924 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> /usr/share/ca-certificates/166072.pem
	I1009 19:46:21.668076   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 19:46:21.691784   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 19:46:21.713873   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 19:46:21.737163   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 19:46:21.759741   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 19:46:21.782475   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 19:46:21.805295   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 19:46:21.830327   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/multinode-707643/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 19:46:21.853493   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 19:46:21.876808   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 19:46:21.899853   46924 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 19:46:21.923113   46924 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 19:46:21.939346   46924 ssh_runner.go:195] Run: openssl version
	I1009 19:46:21.945688   46924 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1009 19:46:21.945771   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 19:46:21.956371   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960732   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960878   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.960929   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 19:46:21.966563   46924 command_runner.go:130] > b5213941
	I1009 19:46:21.966637   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 19:46:21.975864   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 19:46:21.986710   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.990998   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.991026   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.991058   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 19:46:21.996622   46924 command_runner.go:130] > 51391683
	I1009 19:46:21.996703   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 19:46:22.005602   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 19:46:22.015951   46924 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020231   46924 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020252   46924 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.020286   46924 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 19:46:22.025527   46924 command_runner.go:130] > 3ec20f2e
	I1009 19:46:22.025713   46924 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 19:46:22.034354   46924 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:46:22.038422   46924 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 19:46:22.038442   46924 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1009 19:46:22.038451   46924 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1009 19:46:22.038461   46924 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 19:46:22.038471   46924 command_runner.go:130] > Access: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038481   46924 command_runner.go:130] > Modify: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038489   46924 command_runner.go:130] > Change: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038499   46924 command_runner.go:130] >  Birth: 2024-10-09 19:39:36.626993158 +0000
	I1009 19:46:22.038649   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 19:46:22.044086   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.044148   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 19:46:22.049449   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.049505   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 19:46:22.054726   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.054930   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 19:46:22.060446   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.060510   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 19:46:22.066485   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.066544   46924 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 19:46:22.072059   46924 command_runner.go:130] > Certificate will not expire
	I1009 19:46:22.072133   46924 kubeadm.go:392] StartCluster: {Name:multinode-707643 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-707643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.115 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.236 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:46:22.072273   46924 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 19:46:22.072339   46924 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 19:46:22.107184   46924 command_runner.go:130] > 137f5e5991a475816f0f2fd1ae1900dccb0ee8955add3ac300727bbfd51b268e
	I1009 19:46:22.107213   46924 command_runner.go:130] > 791a11c2b24e7aed80bbfe294018d137cdf325edc2ad6b7a05b00abab933fada
	I1009 19:46:22.107221   46924 command_runner.go:130] > 11e750a0fe4b254cb90bf3513caf97a0129c0866c7f35eea2190de27e986f9b7
	I1009 19:46:22.107228   46924 command_runner.go:130] > cf9bee6158f723ce4f9a1a4c961b79ab623bda893a438e809a36c0381b6ddbfb
	I1009 19:46:22.107234   46924 command_runner.go:130] > e48ae1f37a3b271ac5db564c8ee7e4e27ba72a41a7c53e9c2e6081ba9d9c21e8
	I1009 19:46:22.107239   46924 command_runner.go:130] > 9fd8c438bbd4f205686715faf46ce3310a5b05263c9bb6183f8733657747a4c1
	I1009 19:46:22.107244   46924 command_runner.go:130] > dda813262aeb7c542d6c02c55697b41375e79273caf8095406901f40a1563fec
	I1009 19:46:22.107252   46924 command_runner.go:130] > be275566ef7e4f33148de7fac7e82f5811095be11b8d5a2501f04768feefc372
	I1009 19:46:22.108628   46924 cri.go:89] found id: "137f5e5991a475816f0f2fd1ae1900dccb0ee8955add3ac300727bbfd51b268e"
	I1009 19:46:22.108651   46924 cri.go:89] found id: "791a11c2b24e7aed80bbfe294018d137cdf325edc2ad6b7a05b00abab933fada"
	I1009 19:46:22.108657   46924 cri.go:89] found id: "11e750a0fe4b254cb90bf3513caf97a0129c0866c7f35eea2190de27e986f9b7"
	I1009 19:46:22.108662   46924 cri.go:89] found id: "cf9bee6158f723ce4f9a1a4c961b79ab623bda893a438e809a36c0381b6ddbfb"
	I1009 19:46:22.108666   46924 cri.go:89] found id: "e48ae1f37a3b271ac5db564c8ee7e4e27ba72a41a7c53e9c2e6081ba9d9c21e8"
	I1009 19:46:22.108670   46924 cri.go:89] found id: "9fd8c438bbd4f205686715faf46ce3310a5b05263c9bb6183f8733657747a4c1"
	I1009 19:46:22.108674   46924 cri.go:89] found id: "dda813262aeb7c542d6c02c55697b41375e79273caf8095406901f40a1563fec"
	I1009 19:46:22.108678   46924 cri.go:89] found id: "be275566ef7e4f33148de7fac7e82f5811095be11b8d5a2501f04768feefc372"
	I1009 19:46:22.108681   46924 cri.go:89] found id: ""
	I1009 19:46:22.108723   46924 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-707643 -n multinode-707643
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-707643 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.17s)
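The post-mortem log above shows minikube re-validating each control-plane certificate with "openssl x509 -noout -checkend 86400" before reusing it. The following is a minimal Go sketch of the same 24-hour expiry check; the helper name expiresWithin and the certificate path are illustrative only and are not taken from the minikube code base.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first PEM certificate in the file at path
// expires within d, roughly what "openssl x509 -checkend" verifies.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		return false, errors.New("no PEM certificate found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path only; the log above checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}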

                                                
                                    
x
+
TestPreload (272.41s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-805910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1009 19:54:51.614701   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:54:51.908363   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-805910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.990748213s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-805910 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-805910 image pull gcr.io/k8s-minikube/busybox: (3.199035054s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-805910
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-805910: exit status 82 (2m0.45709572s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-805910"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-805910 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-10-09 19:58:43.191048484 +0000 UTC m=+4334.180821409
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-805910 -n test-preload-805910
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-805910 -n test-preload-805910: exit status 3 (18.651831636s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 19:59:01.839385   51733 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.57:22: connect: no route to host
	E1009 19:59:01.839411   51733 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.57:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-805910" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-805910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-805910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-805910: (1.109672335s)
--- FAIL: TestPreload (272.41s)
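TestPreload fails because "minikube stop" gives up after its stop timeout while the VM is still reported as "Running", producing exit status 82 (GUEST_STOP_TIMEOUT). The Go sketch below illustrates that stop-then-poll-until-deadline pattern in general terms; the machine interface, the stopWithDeadline helper, and the fakeVM type are hypothetical and do not reflect minikube's actual libmachine/kvm2 implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a stand-in for a VM driver that can be asked to stop and queried for state.
type machine interface {
	Stop() error
	State() (string, error)
}

// stopWithDeadline requests a stop, then polls the state until the VM is no
// longer "Running" or the deadline passes, at which point it reports an error
// similar to the GUEST_STOP_TIMEOUT failure above.
func stopWithDeadline(m machine, timeout, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state != "Running" {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM simulates a guest that never shuts down, like test-preload-805910 above.
type fakeVM struct{}

func (fakeVM) Stop() error            { return nil }
func (fakeVM) State() (string, error) { return "Running", nil }

func main() {
	if err := stopWithDeadline(fakeVM{}, 3*time.Second, time.Second); err != nil {
		fmt.Println("stop timed out:", err)
	}
}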

                                                
                                    
x
+
TestKubernetesUpgrade (394.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.997924287s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-790037] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-790037" primary control-plane node in "kubernetes-upgrade-790037" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:00:58.538448   52808 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:00:58.538638   52808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:00:58.538665   52808 out.go:358] Setting ErrFile to fd 2...
	I1009 20:00:58.538681   52808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:00:58.539073   52808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:00:58.540003   52808 out.go:352] Setting JSON to false
	I1009 20:00:58.540899   52808 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6200,"bootTime":1728497859,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:00:58.540988   52808 start.go:139] virtualization: kvm guest
	I1009 20:00:58.543594   52808 out.go:177] * [kubernetes-upgrade-790037] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:00:58.545389   52808 notify.go:220] Checking for updates...
	I1009 20:00:58.546926   52808 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:00:58.549799   52808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:00:58.551268   52808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:00:58.552575   52808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:00:58.554291   52808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:00:58.555755   52808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:00:58.557405   52808 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:00:58.593896   52808 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:00:58.595231   52808 start.go:297] selected driver: kvm2
	I1009 20:00:58.595248   52808 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:00:58.595262   52808 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:00:58.596010   52808 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:00:58.596102   52808 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:00:58.611533   52808 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:00:58.611589   52808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 20:00:58.611859   52808 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:00:58.611885   52808 cni.go:84] Creating CNI manager for ""
	I1009 20:00:58.611941   52808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:00:58.611953   52808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 20:00:58.612022   52808 start.go:340] cluster config:
	{Name:kubernetes-upgrade-790037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-790037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:00:58.612148   52808 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:00:58.617315   52808 out.go:177] * Starting "kubernetes-upgrade-790037" primary control-plane node in "kubernetes-upgrade-790037" cluster
	I1009 20:00:58.619079   52808 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:00:58.619132   52808 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:00:58.619145   52808 cache.go:56] Caching tarball of preloaded images
	I1009 20:00:58.619245   52808 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:00:58.619259   52808 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:00:58.619544   52808 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/config.json ...
	I1009 20:00:58.619571   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/config.json: {Name:mk9b74bbe4bbe5c8da21e6601c6bdf20cbe03291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:00:58.619727   52808 start.go:360] acquireMachinesLock for kubernetes-upgrade-790037: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:00:58.619785   52808 start.go:364] duration metric: took 32.203µs to acquireMachinesLock for "kubernetes-upgrade-790037"
	I1009 20:00:58.619808   52808 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-790037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-790037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:00:58.619879   52808 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:00:58.621603   52808 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 20:00:58.621733   52808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:00:58.621780   52808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:00:58.636186   52808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35587
	I1009 20:00:58.636625   52808 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:00:58.637223   52808 main.go:141] libmachine: Using API Version  1
	I1009 20:00:58.637328   52808 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:00:58.637643   52808 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:00:58.637864   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetMachineName
	I1009 20:00:58.638031   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:00:58.638224   52808 start.go:159] libmachine.API.Create for "kubernetes-upgrade-790037" (driver="kvm2")
	I1009 20:00:58.638253   52808 client.go:168] LocalClient.Create starting
	I1009 20:00:58.638285   52808 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:00:58.638319   52808 main.go:141] libmachine: Decoding PEM data...
	I1009 20:00:58.638341   52808 main.go:141] libmachine: Parsing certificate...
	I1009 20:00:58.638405   52808 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:00:58.638435   52808 main.go:141] libmachine: Decoding PEM data...
	I1009 20:00:58.638458   52808 main.go:141] libmachine: Parsing certificate...
	I1009 20:00:58.638503   52808 main.go:141] libmachine: Running pre-create checks...
	I1009 20:00:58.638539   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .PreCreateCheck
	I1009 20:00:58.638900   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetConfigRaw
	I1009 20:00:58.639310   52808 main.go:141] libmachine: Creating machine...
	I1009 20:00:58.639321   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Create
	I1009 20:00:58.639445   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Creating KVM machine...
	I1009 20:00:58.640831   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found existing default KVM network
	I1009 20:00:58.641498   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:00:58.641369   52847 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000123a30}
	I1009 20:00:58.641523   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | created network xml: 
	I1009 20:00:58.641531   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | <network>
	I1009 20:00:58.641541   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   <name>mk-kubernetes-upgrade-790037</name>
	I1009 20:00:58.641548   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   <dns enable='no'/>
	I1009 20:00:58.641552   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   
	I1009 20:00:58.641564   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 20:00:58.641574   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |     <dhcp>
	I1009 20:00:58.641585   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 20:00:58.641598   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |     </dhcp>
	I1009 20:00:58.641611   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   </ip>
	I1009 20:00:58.641620   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG |   
	I1009 20:00:58.641629   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | </network>
	I1009 20:00:58.641636   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | 
	I1009 20:00:58.647390   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | trying to create private KVM network mk-kubernetes-upgrade-790037 192.168.39.0/24...
	I1009 20:00:58.711202   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | private KVM network mk-kubernetes-upgrade-790037 192.168.39.0/24 created
	I1009 20:00:58.711266   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:00:58.711140   52847 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:00:58.711285   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037 ...
	I1009 20:00:58.711308   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:00:58.711322   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:00:58.957882   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:00:58.957733   52847 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa...
	I1009 20:00:59.105138   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:00:59.104998   52847 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/kubernetes-upgrade-790037.rawdisk...
	I1009 20:00:59.105171   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Writing magic tar header
	I1009 20:00:59.105190   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Writing SSH key tar header
	I1009 20:00:59.105204   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:00:59.105111   52847 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037 ...
	I1009 20:00:59.105225   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037
	I1009 20:00:59.105243   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:00:59.105257   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037 (perms=drwx------)
	I1009 20:00:59.105281   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:00:59.105301   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:00:59.105313   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:00:59.105325   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:00:59.105333   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:00:59.105340   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 20:00:59.105351   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Creating domain...
	I1009 20:00:59.105367   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:00:59.105384   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:00:59.105404   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:00:59.105415   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Checking permissions on dir: /home
	I1009 20:00:59.105424   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Skipping /home - not owner
	I1009 20:00:59.106528   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) define libvirt domain using xml: 
	I1009 20:00:59.106549   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) <domain type='kvm'>
	I1009 20:00:59.106560   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <name>kubernetes-upgrade-790037</name>
	I1009 20:00:59.106569   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <memory unit='MiB'>2200</memory>
	I1009 20:00:59.106578   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <vcpu>2</vcpu>
	I1009 20:00:59.106589   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <features>
	I1009 20:00:59.106595   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <acpi/>
	I1009 20:00:59.106602   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <apic/>
	I1009 20:00:59.106608   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <pae/>
	I1009 20:00:59.106615   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     
	I1009 20:00:59.106620   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   </features>
	I1009 20:00:59.106630   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <cpu mode='host-passthrough'>
	I1009 20:00:59.106638   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   
	I1009 20:00:59.106650   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   </cpu>
	I1009 20:00:59.106659   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <os>
	I1009 20:00:59.106669   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <type>hvm</type>
	I1009 20:00:59.106678   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <boot dev='cdrom'/>
	I1009 20:00:59.106685   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <boot dev='hd'/>
	I1009 20:00:59.106690   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <bootmenu enable='no'/>
	I1009 20:00:59.106697   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   </os>
	I1009 20:00:59.106702   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   <devices>
	I1009 20:00:59.106712   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <disk type='file' device='cdrom'>
	I1009 20:00:59.106728   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/boot2docker.iso'/>
	I1009 20:00:59.106748   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <target dev='hdc' bus='scsi'/>
	I1009 20:00:59.106757   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <readonly/>
	I1009 20:00:59.106764   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </disk>
	I1009 20:00:59.106773   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <disk type='file' device='disk'>
	I1009 20:00:59.106785   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:00:59.106802   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/kubernetes-upgrade-790037.rawdisk'/>
	I1009 20:00:59.106816   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <target dev='hda' bus='virtio'/>
	I1009 20:00:59.106835   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </disk>
	I1009 20:00:59.106846   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <interface type='network'>
	I1009 20:00:59.106858   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <source network='mk-kubernetes-upgrade-790037'/>
	I1009 20:00:59.106875   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <model type='virtio'/>
	I1009 20:00:59.106885   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </interface>
	I1009 20:00:59.106893   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <interface type='network'>
	I1009 20:00:59.106899   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <source network='default'/>
	I1009 20:00:59.106905   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <model type='virtio'/>
	I1009 20:00:59.106910   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </interface>
	I1009 20:00:59.106916   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <serial type='pty'>
	I1009 20:00:59.106944   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <target port='0'/>
	I1009 20:00:59.106966   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </serial>
	I1009 20:00:59.106984   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <console type='pty'>
	I1009 20:00:59.107003   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <target type='serial' port='0'/>
	I1009 20:00:59.107015   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </console>
	I1009 20:00:59.107031   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     <rng model='virtio'>
	I1009 20:00:59.107040   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)       <backend model='random'>/dev/random</backend>
	I1009 20:00:59.107048   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     </rng>
	I1009 20:00:59.107075   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     
	I1009 20:00:59.107082   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)     
	I1009 20:00:59.107093   52808 main.go:141] libmachine: (kubernetes-upgrade-790037)   </devices>
	I1009 20:00:59.107102   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) </domain>
	I1009 20:00:59.107111   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) 
	I1009 20:00:59.111625   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:dd:08:1d in network default
	I1009 20:00:59.112186   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Ensuring networks are active...
	I1009 20:00:59.112207   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:00:59.112870   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Ensuring network default is active
	I1009 20:00:59.113156   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Ensuring network mk-kubernetes-upgrade-790037 is active
	I1009 20:00:59.113590   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Getting domain xml...
	I1009 20:00:59.114418   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Creating domain...
	I1009 20:01:00.393500   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Waiting to get IP...
	I1009 20:01:00.394289   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:00.394843   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:00.394872   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:00.394818   52847 retry.go:31] will retry after 305.493472ms: waiting for machine to come up
	I1009 20:01:00.702911   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:00.704016   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:00.704041   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:00.703960   52847 retry.go:31] will retry after 312.042614ms: waiting for machine to come up
	I1009 20:01:01.017509   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.017967   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.017995   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:01.017917   52847 retry.go:31] will retry after 433.617706ms: waiting for machine to come up
	I1009 20:01:01.453410   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.453801   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.453844   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:01.453801   52847 retry.go:31] will retry after 377.793393ms: waiting for machine to come up
	I1009 20:01:01.833557   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.833959   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:01.833987   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:01.833904   52847 retry.go:31] will retry after 587.005324ms: waiting for machine to come up
	I1009 20:01:02.422778   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:02.423231   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:02.423283   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:02.423180   52847 retry.go:31] will retry after 664.430432ms: waiting for machine to come up
	I1009 20:01:03.088982   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:03.089423   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:03.089459   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:03.089362   52847 retry.go:31] will retry after 921.393085ms: waiting for machine to come up
	I1009 20:01:04.011995   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:04.012426   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:04.012456   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:04.012369   52847 retry.go:31] will retry after 967.330602ms: waiting for machine to come up
	I1009 20:01:04.981112   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:04.981575   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:04.981596   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:04.981523   52847 retry.go:31] will retry after 1.791792601s: waiting for machine to come up
	I1009 20:01:06.775391   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:06.775822   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:06.775854   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:06.775751   52847 retry.go:31] will retry after 2.231821799s: waiting for machine to come up
	I1009 20:01:09.008996   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:09.009464   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:09.009494   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:09.009418   52847 retry.go:31] will retry after 1.867512717s: waiting for machine to come up
	I1009 20:01:10.878403   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:10.878788   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:10.878819   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:10.878738   52847 retry.go:31] will retry after 2.511854581s: waiting for machine to come up
	I1009 20:01:13.393094   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:13.393519   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:13.393548   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:13.393475   52847 retry.go:31] will retry after 4.080390173s: waiting for machine to come up
	I1009 20:01:17.478151   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:17.478523   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find current IP address of domain kubernetes-upgrade-790037 in network mk-kubernetes-upgrade-790037
	I1009 20:01:17.478548   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | I1009 20:01:17.478495   52847 retry.go:31] will retry after 4.691790633s: waiting for machine to come up
	I1009 20:01:22.174127   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.174667   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has current primary IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.174685   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Found IP for machine: 192.168.39.62
	I1009 20:01:22.174693   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Reserving static IP address...
	I1009 20:01:22.175206   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-790037", mac: "52:54:00:84:52:fc", ip: "192.168.39.62"} in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.248814   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Reserved static IP address: 192.168.39.62
	I1009 20:01:22.248845   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Waiting for SSH to be available...
	I1009 20:01:22.248856   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Getting to WaitForSSH function...
	I1009 20:01:22.251308   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.251671   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:minikube Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.251693   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.251808   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Using SSH client type: external
	I1009 20:01:22.251826   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa (-rw-------)
	I1009 20:01:22.251860   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.62 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:01:22.251873   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | About to run SSH command:
	I1009 20:01:22.251885   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | exit 0
	I1009 20:01:22.375159   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | SSH cmd err, output: <nil>: 
	I1009 20:01:22.375443   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) KVM machine creation complete!
	I1009 20:01:22.375728   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetConfigRaw
	I1009 20:01:22.376328   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:22.376525   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:22.376691   52808 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:01:22.376705   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetState
	I1009 20:01:22.377905   52808 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:01:22.377920   52808 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:01:22.377925   52808 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:01:22.377934   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:22.380342   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.380713   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.380743   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.380856   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:22.381003   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.381158   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.381309   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:22.381472   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:22.381720   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:22.381739   52808 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:01:22.482474   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:01:22.482502   52808 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:01:22.482514   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:22.485523   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.485847   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.485885   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.486034   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:22.486214   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.486361   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.486459   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:22.486597   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:22.486757   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:22.486768   52808 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:01:22.592043   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:01:22.592137   52808 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:01:22.592151   52808 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:01:22.592163   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetMachineName
	I1009 20:01:22.592424   52808 buildroot.go:166] provisioning hostname "kubernetes-upgrade-790037"
	I1009 20:01:22.592459   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetMachineName
	I1009 20:01:22.592643   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:22.595103   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.595453   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.595484   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.595654   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:22.595831   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.595970   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.596085   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:22.596218   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:22.596380   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:22.596391   52808 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-790037 && echo "kubernetes-upgrade-790037" | sudo tee /etc/hostname
	I1009 20:01:22.714017   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-790037
	
	I1009 20:01:22.714045   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:22.716601   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.716962   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.716986   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.717140   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:22.717365   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.717521   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:22.717686   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:22.717856   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:22.718030   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:22.718051   52808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-790037' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-790037/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-790037' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:01:22.828500   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:01:22.828534   52808 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:01:22.828577   52808 buildroot.go:174] setting up certificates
	I1009 20:01:22.828591   52808 provision.go:84] configureAuth start
	I1009 20:01:22.828603   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetMachineName
	I1009 20:01:22.828882   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetIP
	I1009 20:01:22.831529   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.831871   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.831901   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.832058   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:22.834330   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.834624   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:22.834642   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:22.834778   52808 provision.go:143] copyHostCerts
	I1009 20:01:22.834837   52808 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:01:22.834845   52808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:01:22.834906   52808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:01:22.834999   52808 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:01:22.835006   52808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:01:22.835029   52808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:01:22.835109   52808 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:01:22.835120   52808 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:01:22.835144   52808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:01:22.835189   52808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-790037 san=[127.0.0.1 192.168.39.62 kubernetes-upgrade-790037 localhost minikube]
	I1009 20:01:23.187363   52808 provision.go:177] copyRemoteCerts
	I1009 20:01:23.187442   52808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:01:23.187472   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.190084   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.190434   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.190471   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.190600   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.190765   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.190893   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.191017   52808 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:01:23.269407   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:01:23.294045   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 20:01:23.318870   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:01:23.345881   52808 provision.go:87] duration metric: took 517.276375ms to configureAuth
	I1009 20:01:23.345913   52808 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:01:23.346065   52808 config.go:182] Loaded profile config "kubernetes-upgrade-790037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:01:23.346157   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.349088   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.349439   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.349491   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.349669   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.349836   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.349968   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.350075   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.350243   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:23.350403   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:23.350417   52808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:01:23.573224   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:01:23.573254   52808 main.go:141] libmachine: Checking connection to Docker...
	I1009 20:01:23.573274   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetURL
	I1009 20:01:23.574620   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Using libvirt version 6000000
	I1009 20:01:23.577191   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.577615   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.577648   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.577807   52808 main.go:141] libmachine: Docker is up and running!
	I1009 20:01:23.577824   52808 main.go:141] libmachine: Reticulating splines...
	I1009 20:01:23.577830   52808 client.go:171] duration metric: took 24.939568088s to LocalClient.Create
	I1009 20:01:23.577851   52808 start.go:167] duration metric: took 24.939628251s to libmachine.API.Create "kubernetes-upgrade-790037"
	I1009 20:01:23.577858   52808 start.go:293] postStartSetup for "kubernetes-upgrade-790037" (driver="kvm2")
	I1009 20:01:23.577869   52808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:01:23.577884   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:23.578110   52808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:01:23.578135   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.580339   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.580707   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.580737   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.580858   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.581065   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.581259   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.581345   52808 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:01:23.664075   52808 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:01:23.668775   52808 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:01:23.668797   52808 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:01:23.668861   52808 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:01:23.668954   52808 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:01:23.669070   52808 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:01:23.678059   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:01:23.702331   52808 start.go:296] duration metric: took 124.458312ms for postStartSetup
	I1009 20:01:23.702380   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetConfigRaw
	I1009 20:01:23.703109   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetIP
	I1009 20:01:23.706113   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.706448   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.706479   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.706756   52808 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/config.json ...
	I1009 20:01:23.706966   52808 start.go:128] duration metric: took 25.087077842s to createHost
	I1009 20:01:23.706989   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.709471   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.709778   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.709804   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.709961   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.710137   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.710291   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.710427   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.710610   52808 main.go:141] libmachine: Using SSH client type: native
	I1009 20:01:23.710806   52808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.62 22 <nil> <nil>}
	I1009 20:01:23.710826   52808 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:01:23.811724   52808 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728504083.787436760
	
	I1009 20:01:23.811749   52808 fix.go:216] guest clock: 1728504083.787436760
	I1009 20:01:23.811759   52808 fix.go:229] Guest: 2024-10-09 20:01:23.78743676 +0000 UTC Remote: 2024-10-09 20:01:23.706978593 +0000 UTC m=+25.209709299 (delta=80.458167ms)
	I1009 20:01:23.811783   52808 fix.go:200] guest clock delta is within tolerance: 80.458167ms
	I1009 20:01:23.811793   52808 start.go:83] releasing machines lock for "kubernetes-upgrade-790037", held for 25.191997543s
	I1009 20:01:23.811819   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:23.812096   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetIP
	I1009 20:01:23.815000   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.815425   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.815458   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.815662   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:23.816181   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:23.816356   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:01:23.816449   52808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:01:23.816487   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.816631   52808 ssh_runner.go:195] Run: cat /version.json
	I1009 20:01:23.816656   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:01:23.819292   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.819428   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.819657   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.819681   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.819927   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.819941   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:23.819972   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:23.820118   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.820123   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:01:23.820278   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:01:23.820286   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.820422   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:01:23.820478   52808 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:01:23.820593   52808 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:01:23.897095   52808 ssh_runner.go:195] Run: systemctl --version
	I1009 20:01:23.924444   52808 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:01:24.087869   52808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:01:24.094884   52808 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:01:24.094960   52808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:01:24.120153   52808 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:01:24.120178   52808 start.go:495] detecting cgroup driver to use...
	I1009 20:01:24.120246   52808 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:01:24.137664   52808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:01:24.151557   52808 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:01:24.151635   52808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:01:24.166008   52808 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:01:24.180925   52808 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:01:24.316901   52808 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:01:24.485788   52808 docker.go:233] disabling docker service ...
	I1009 20:01:24.485863   52808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:01:24.500692   52808 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:01:24.514250   52808 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:01:24.655051   52808 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:01:24.778000   52808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:01:24.795464   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:01:24.815877   52808 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:01:24.815928   52808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:24.826203   52808 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:01:24.826312   52808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:24.837409   52808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:24.847648   52808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:01:24.857941   52808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:01:24.869599   52808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:01:24.882565   52808 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:01:24.882654   52808 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:01:24.899568   52808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:01:24.909725   52808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:01:25.032238   52808 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:01:25.143430   52808 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:01:25.143504   52808 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:01:25.150953   52808 start.go:563] Will wait 60s for crictl version
	I1009 20:01:25.151013   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:25.154972   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:01:25.198391   52808 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:01:25.198472   52808 ssh_runner.go:195] Run: crio --version
	I1009 20:01:25.228175   52808 ssh_runner.go:195] Run: crio --version
	I1009 20:01:25.261098   52808 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:01:25.262524   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetIP
	I1009 20:01:25.265839   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:25.266259   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:01:13 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:01:25.266291   52808 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:01:25.266587   52808 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:01:25.271226   52808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:01:25.285063   52808 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-790037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-790037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:01:25.285154   52808 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:01:25.285195   52808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:01:25.320112   52808 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:01:25.320169   52808 ssh_runner.go:195] Run: which lz4
	I1009 20:01:25.324634   52808 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:01:25.329040   52808 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:01:25.329069   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:01:27.061041   52808 crio.go:462] duration metric: took 1.736464552s to copy over tarball
	I1009 20:01:27.061125   52808 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:01:29.661628   52808 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.600432816s)
	I1009 20:01:29.661680   52808 crio.go:469] duration metric: took 2.600604371s to extract the tarball
	I1009 20:01:29.661692   52808 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:01:29.706464   52808 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:01:29.761259   52808 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:01:29.761288   52808 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:01:29.761358   52808 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:01:29.761378   52808 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:29.761392   52808 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:29.761402   52808 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:29.761446   52808 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:01:29.761485   52808 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:01:29.761368   52808 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:29.761368   52808 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:29.762974   52808 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:01:29.763029   52808 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:29.763030   52808 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:01:29.763075   52808 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:29.763089   52808 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:29.763000   52808 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:29.762984   52808 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:01:29.763008   52808 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:29.938080   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:29.982208   52808 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:01:29.982247   52808 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:29.982293   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:29.986442   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:29.989794   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:30.004865   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:01:30.031346   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:30.043314   52808 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:01:30.043360   52808 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:30.043423   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.091755   52808 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:01:30.091802   52808 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:01:30.091865   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:01:30.091882   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:30.091870   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.112056   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:30.113054   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:30.134480   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:30.168044   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:30.174884   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:01:30.174926   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:01:30.203363   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:01:30.297737   52808 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:01:30.297768   52808 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:01:30.297783   52808 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:30.297798   52808 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:30.297826   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.297836   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.297862   52808 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:01:30.297914   52808 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:30.297928   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:01:30.297949   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.308581   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:01:30.323650   52808 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:01:30.323697   52808 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:01:30.323746   52808 ssh_runner.go:195] Run: which crictl
	I1009 20:01:30.323751   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:30.323790   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:30.386727   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:30.386984   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:01:30.410681   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:01:30.410832   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:01:30.424970   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:30.424976   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:30.445967   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:30.518553   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:01:30.518567   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:01:30.560564   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:01:30.560603   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:01:30.560666   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:01:30.568252   52808 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:01:30.654328   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:01:30.654404   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:01:30.654437   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:01:30.658542   52808 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:01:30.907737   52808 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:01:31.047277   52808 cache_images.go:92] duration metric: took 1.285971451s to LoadCachedImages
	W1009 20:01:31.047389   52808 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1009 20:01:31.047407   52808 kubeadm.go:934] updating node { 192.168.39.62 8443 v1.20.0 crio true true} ...
	I1009 20:01:31.047539   52808 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-790037 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.62
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-790037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:01:31.047621   52808 ssh_runner.go:195] Run: crio config
	I1009 20:01:31.094507   52808 cni.go:84] Creating CNI manager for ""
	I1009 20:01:31.094534   52808 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:01:31.094550   52808 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:01:31.094575   52808 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.62 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-790037 NodeName:kubernetes-upgrade-790037 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.62"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.62 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:01:31.094732   52808 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.62
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-790037"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.62
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.62"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:01:31.094804   52808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:01:31.105035   52808 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:01:31.105106   52808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:01:31.118835   52808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1009 20:01:31.140007   52808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:01:31.158102   52808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:01:31.175324   52808 ssh_runner.go:195] Run: grep 192.168.39.62	control-plane.minikube.internal$ /etc/hosts
	I1009 20:01:31.179229   52808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.62	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:01:31.192126   52808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:01:31.321885   52808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:01:31.340290   52808 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037 for IP: 192.168.39.62
	I1009 20:01:31.340316   52808 certs.go:194] generating shared ca certs ...
	I1009 20:01:31.340336   52808 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.340473   52808 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:01:31.340513   52808 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:01:31.340523   52808 certs.go:256] generating profile certs ...
	I1009 20:01:31.340576   52808 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.key
	I1009 20:01:31.340593   52808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.crt with IP's: []
	I1009 20:01:31.521628   52808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.crt ...
	I1009 20:01:31.521659   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.crt: {Name:mkcd69d0ec6aacff90f0c2ae108fa34ed1170f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.521839   52808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.key ...
	I1009 20:01:31.521856   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/client.key: {Name:mk433d48cbf3747330c7a47c0979da5b10a0c93f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.521962   52808 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key.b1cfebef
	I1009 20:01:31.521983   52808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt.b1cfebef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.62]
	I1009 20:01:31.696900   52808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt.b1cfebef ...
	I1009 20:01:31.696927   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt.b1cfebef: {Name:mk65d93b382e79603a4359d29366c3283f430c3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.697084   52808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key.b1cfebef ...
	I1009 20:01:31.697099   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key.b1cfebef: {Name:mkbd07335212ae772b77af922100a1d4179405c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.697167   52808 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt.b1cfebef -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt
	I1009 20:01:31.697267   52808 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key.b1cfebef -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key
	I1009 20:01:31.697328   52808 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.key
	I1009 20:01:31.697344   52808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.crt with IP's: []
	I1009 20:01:31.786023   52808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.crt ...
	I1009 20:01:31.786051   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.crt: {Name:mk8439468b6664f943c0df934a3b81a4e0ff0609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.786213   52808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.key ...
	I1009 20:01:31.786228   52808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.key: {Name:mkff6733b5e81d8933578bb0e3d275c920f8a967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:01:31.786393   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:01:31.786429   52808 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:01:31.786439   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:01:31.786461   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:01:31.786484   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:01:31.786503   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:01:31.786546   52808 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:01:31.787206   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:01:31.816649   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:01:31.844709   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:01:31.872267   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:01:31.897796   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1009 20:01:31.922800   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:01:31.947846   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:01:31.972556   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/kubernetes-upgrade-790037/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:01:31.997191   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:01:32.024822   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:01:32.052306   52808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:01:32.080836   52808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:01:32.097931   52808 ssh_runner.go:195] Run: openssl version
	I1009 20:01:32.104026   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:01:32.115733   52808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:32.120564   52808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:32.120626   52808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:01:32.126471   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:01:32.137757   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:01:32.148566   52808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:01:32.153080   52808 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:01:32.153130   52808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:01:32.158968   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:01:32.175264   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:01:32.191802   52808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:01:32.196705   52808 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:01:32.196754   52808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:01:32.206346   52808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:01:32.223040   52808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:01:32.229846   52808 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:01:32.229913   52808 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-790037 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-790037 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.62 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:01:32.230020   52808 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:01:32.230081   52808 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:01:32.280425   52808 cri.go:89] found id: ""
	I1009 20:01:32.280498   52808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:01:32.290487   52808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:01:32.300500   52808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:01:32.309758   52808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:01:32.309779   52808 kubeadm.go:157] found existing configuration files:
	
	I1009 20:01:32.309825   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:01:32.319131   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:01:32.319201   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:01:32.328826   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:01:32.337861   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:01:32.337937   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:01:32.347178   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:01:32.356411   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:01:32.356477   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:01:32.366180   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:01:32.374863   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:01:32.374926   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:01:32.384162   52808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:01:32.648560   52808 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:03:31.297212   52808 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:03:31.297304   52808 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:03:31.298809   52808 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:03:31.298887   52808 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:03:31.299013   52808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:03:31.299160   52808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:03:31.299287   52808 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:03:31.299406   52808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:03:31.301024   52808 out.go:235]   - Generating certificates and keys ...
	I1009 20:03:31.301115   52808 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:03:31.301202   52808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:03:31.301294   52808 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:03:31.301370   52808 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:03:31.301453   52808 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:03:31.301527   52808 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 20:03:31.301601   52808 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 20:03:31.301753   52808 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1009 20:03:31.301818   52808 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 20:03:31.301991   52808 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	I1009 20:03:31.302089   52808 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:03:31.302177   52808 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:03:31.302242   52808 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 20:03:31.302321   52808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:03:31.302386   52808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:03:31.302467   52808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:03:31.302554   52808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:03:31.302627   52808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:03:31.302777   52808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:03:31.302888   52808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:03:31.302950   52808 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:03:31.303025   52808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:03:31.304372   52808 out.go:235]   - Booting up control plane ...
	I1009 20:03:31.304461   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:03:31.304545   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:03:31.304630   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:03:31.304736   52808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:03:31.304913   52808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:03:31.304969   52808 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:03:31.305040   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:03:31.305287   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:03:31.305388   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:03:31.305662   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:03:31.305760   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:03:31.305999   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:03:31.306064   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:03:31.306217   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:03:31.306291   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:03:31.306484   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:03:31.306493   52808 kubeadm.go:310] 
	I1009 20:03:31.306548   52808 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:03:31.306583   52808 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:03:31.306597   52808 kubeadm.go:310] 
	I1009 20:03:31.306654   52808 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:03:31.306697   52808 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:03:31.306836   52808 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:03:31.306847   52808 kubeadm.go:310] 
	I1009 20:03:31.306944   52808 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:03:31.306976   52808 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:03:31.307003   52808 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:03:31.307009   52808 kubeadm.go:310] 
	I1009 20:03:31.307127   52808 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:03:31.307193   52808 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:03:31.307199   52808 kubeadm.go:310] 
	I1009 20:03:31.307322   52808 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:03:31.307443   52808 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:03:31.307528   52808 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:03:31.307605   52808 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:03:31.307629   52808 kubeadm.go:310] 
	W1009 20:03:31.307739   52808 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-790037 localhost] and IPs [192.168.39.62 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:03:31.307784   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:03:31.806521   52808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:03:31.821864   52808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:03:31.832007   52808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:03:31.832029   52808 kubeadm.go:157] found existing configuration files:
	
	I1009 20:03:31.832078   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:03:31.841455   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:03:31.841523   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:03:31.851934   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:03:31.861072   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:03:31.861127   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:03:31.870807   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:03:31.880379   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:03:31.880448   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:03:31.890807   52808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:03:31.899661   52808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:03:31.899716   52808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:03:31.908770   52808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:03:32.135299   52808 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:05:27.943523   52808 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:05:27.943619   52808 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:05:27.945648   52808 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:05:27.945712   52808 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:05:27.945801   52808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:05:27.945929   52808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:05:27.946106   52808 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:05:27.946251   52808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:05:28.080604   52808 out.go:235]   - Generating certificates and keys ...
	I1009 20:05:28.080749   52808 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:05:28.080843   52808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:05:28.080978   52808 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:05:28.081072   52808 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:05:28.081182   52808 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:05:28.081283   52808 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:05:28.081380   52808 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:05:28.081436   52808 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:05:28.081499   52808 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:05:28.081568   52808 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:05:28.081616   52808 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:05:28.081668   52808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:05:28.081729   52808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:05:28.081798   52808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:05:28.081896   52808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:05:28.081977   52808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:05:28.082111   52808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:05:28.082244   52808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:05:28.082316   52808 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:05:28.082405   52808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:05:28.276162   52808 out.go:235]   - Booting up control plane ...
	I1009 20:05:28.276334   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:05:28.276448   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:05:28.276540   52808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:05:28.276649   52808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:05:28.276883   52808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:05:28.276956   52808 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:05:28.277053   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:05:28.277304   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:05:28.277414   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:05:28.277645   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:05:28.277742   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:05:28.277912   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:05:28.277973   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:05:28.278252   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:05:28.278362   52808 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:05:28.278606   52808 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:05:28.278683   52808 kubeadm.go:310] 
	I1009 20:05:28.278741   52808 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:05:28.278800   52808 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:05:28.278817   52808 kubeadm.go:310] 
	I1009 20:05:28.278862   52808 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:05:28.278920   52808 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:05:28.279137   52808 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:05:28.279157   52808 kubeadm.go:310] 
	I1009 20:05:28.279332   52808 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:05:28.279395   52808 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:05:28.279456   52808 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:05:28.279472   52808 kubeadm.go:310] 
	I1009 20:05:28.279606   52808 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:05:28.279732   52808 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:05:28.279746   52808 kubeadm.go:310] 
	I1009 20:05:28.279914   52808 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:05:28.280045   52808 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:05:28.280151   52808 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:05:28.280257   52808 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:05:28.280292   52808 kubeadm.go:310] 
	I1009 20:05:28.280340   52808 kubeadm.go:394] duration metric: took 3m56.050428874s to StartCluster
	I1009 20:05:28.280382   52808 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:05:28.280441   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:05:28.339449   52808 cri.go:89] found id: ""
	I1009 20:05:28.339483   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.339495   52808 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:05:28.339502   52808 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:05:28.339570   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:05:28.378576   52808 cri.go:89] found id: ""
	I1009 20:05:28.378608   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.378618   52808 logs.go:284] No container was found matching "etcd"
	I1009 20:05:28.378625   52808 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:05:28.378687   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:05:28.414057   52808 cri.go:89] found id: ""
	I1009 20:05:28.414089   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.414103   52808 logs.go:284] No container was found matching "coredns"
	I1009 20:05:28.414112   52808 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:05:28.414176   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:05:28.447344   52808 cri.go:89] found id: ""
	I1009 20:05:28.447374   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.447385   52808 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:05:28.447393   52808 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:05:28.447455   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:05:28.485058   52808 cri.go:89] found id: ""
	I1009 20:05:28.485086   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.485097   52808 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:05:28.485104   52808 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:05:28.485162   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:05:28.526301   52808 cri.go:89] found id: ""
	I1009 20:05:28.526334   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.526345   52808 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:05:28.526354   52808 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:05:28.526415   52808 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:05:28.563859   52808 cri.go:89] found id: ""
	I1009 20:05:28.563890   52808 logs.go:282] 0 containers: []
	W1009 20:05:28.563905   52808 logs.go:284] No container was found matching "kindnet"
	I1009 20:05:28.563917   52808 logs.go:123] Gathering logs for kubelet ...
	I1009 20:05:28.563934   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:05:28.640265   52808 logs.go:123] Gathering logs for dmesg ...
	I1009 20:05:28.640298   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:05:28.656092   52808 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:05:28.656124   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:05:28.790237   52808 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:05:28.790260   52808 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:05:28.790276   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:05:28.903198   52808 logs.go:123] Gathering logs for container status ...
	I1009 20:05:28.903233   52808 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:05:28.952936   52808 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:05:28.952997   52808 out.go:270] * 
	* 
	W1009 20:05:28.953059   52808 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:05:28.953079   52808 out.go:270] * 
	* 
	W1009 20:05:28.954286   52808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:05:29.111348   52808 out.go:201] 
	W1009 20:05:29.232177   52808 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:05:29.232241   52808 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:05:29.232274   52808 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:05:29.377716   52808 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
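Note: the kubeadm output above points at the kubelet never becoming healthy, and the minikube log itself suggests passing --extra-config=kubelet.cgroup-driver=systemd. A possible way to chase this when reproducing locally is to rerun the same failing start with that hint added and then inspect the kubelet journal inside the node; the profile name and all flags below are copied from the failing invocation and from the log's own suggestion, and whether the cgroup-driver override actually fixes this failure has not been verified here:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 \
	  --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 \
	  --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-790037 -- sudo journalctl -xeu kubelet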
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-790037
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-790037: (1.462763885s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-790037 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-790037 status --format={{.Host}}: exit status 7 (80.418393ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.261527422s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-790037 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.107761ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-790037] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-790037
	    minikube start -p kubernetes-upgrade-790037 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7900372 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-790037 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-790037 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.992676915s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-09 20:07:29.474574956 +0000 UTC m=+4860.464347872
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-790037 -n kubernetes-upgrade-790037
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-790037 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-790037 logs -n 25: (1.63147407s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-665212 sudo cat             | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-665212 sudo                 | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-665212 sudo                 | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-665212 sudo                 | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-665212 sudo find            | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-665212 sudo crio            | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-665212                      | cilium-665212             | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:05 UTC |
	| start   | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC |                     |
	|         | --no-kubernetes                       |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20             |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:06 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-790037          | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:05 UTC |
	| start   | -p kubernetes-upgrade-790037          | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:06 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-744883 ssh               | cert-options-744883       | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:05 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-744883 -- sudo        | cert-options-744883       | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:05 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-744883                | cert-options-744883       | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:05 UTC |
	| start   | -p force-systemd-env-876990           | force-systemd-env-876990  | jenkins | v1.34.0 | 09 Oct 24 20:05 UTC | 09 Oct 24 20:06 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC | 09 Oct 24 20:06 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-790037          | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-790037          | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC | 09 Oct 24 20:07 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC | 09 Oct 24 20:06 UTC |
	| start   | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC | 09 Oct 24 20:07 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-876990           | force-systemd-env-876990  | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC | 09 Oct 24 20:06 UTC |
	| start   | -p old-k8s-version-169021             | old-k8s-version-169021    | jenkins | v1.34.0 | 09 Oct 24 20:06 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo           | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p NoKubernetes-615869                | NoKubernetes-615869       | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:07:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:07:29.073034   60652 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:07:29.073280   60652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:07:29.073284   60652 out.go:358] Setting ErrFile to fd 2...
	I1009 20:07:29.073287   60652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:07:29.073447   60652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:07:29.073999   60652 out.go:352] Setting JSON to false
	I1009 20:07:29.074927   60652 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6590,"bootTime":1728497859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:07:29.075011   60652 start.go:139] virtualization: kvm guest
	I1009 20:07:29.076576   60652 out.go:177] * [NoKubernetes-615869] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:07:29.077990   60652 notify.go:220] Checking for updates...
	I1009 20:07:29.078007   60652 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:07:29.079264   60652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:07:29.080694   60652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:07:29.082539   60652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:07:29.083903   60652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:07:29.084792   60652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:07:29.086502   60652 config.go:182] Loaded profile config "NoKubernetes-615869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 20:07:29.086935   60652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:07:29.086974   60652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:07:29.104104   60652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42157
	I1009 20:07:29.104585   60652 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:07:29.105198   60652 main.go:141] libmachine: Using API Version  1
	I1009 20:07:29.105215   60652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:07:29.105529   60652 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:07:29.105717   60652 main.go:141] libmachine: (NoKubernetes-615869) Calling .DriverName
	I1009 20:07:29.105938   60652 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1009 20:07:29.105962   60652 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:07:29.106386   60652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:07:29.106426   60652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:07:29.122541   60652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41331
	I1009 20:07:29.123039   60652 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:07:29.123547   60652 main.go:141] libmachine: Using API Version  1
	I1009 20:07:29.123568   60652 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:07:29.123961   60652 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:07:29.124284   60652 main.go:141] libmachine: (NoKubernetes-615869) Calling .DriverName
	I1009 20:07:29.163929   60652 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:07:29.164957   60652 start.go:297] selected driver: kvm2
	I1009 20:07:29.164965   60652 start.go:901] validating driver "kvm2" against &{Name:NoKubernetes-615869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v0.0.0 ClusterName:NoKubernetes-615869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:07:29.165062   60652 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:07:29.165366   60652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:07:29.165433   60652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:07:29.182849   60652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:07:29.183553   60652 cni.go:84] Creating CNI manager for ""
	I1009 20:07:29.183598   60652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:07:29.183649   60652 start.go:340] cluster config:
	{Name:NoKubernetes-615869 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-615869 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.28 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPa
th: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:07:29.183749   60652 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:07:29.185814   60652 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-615869
	I1009 20:07:28.431436   59620 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:07:28.431458   59620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:07:28.431484   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:07:28.434724   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:07:28.435156   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:05:59 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:07:28.435181   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:07:28.435412   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:07:28.435606   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:07:28.435744   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:07:28.435886   59620 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:07:28.445124   59620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43307
	I1009 20:07:28.445556   59620 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:07:28.446000   59620 main.go:141] libmachine: Using API Version  1
	I1009 20:07:28.446016   59620 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:07:28.446371   59620 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:07:28.446625   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetState
	I1009 20:07:28.448267   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .DriverName
	I1009 20:07:28.448466   59620 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:07:28.448483   59620 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:07:28.448496   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHHostname
	I1009 20:07:28.451007   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:07:28.451503   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:52:fc", ip: ""} in network mk-kubernetes-upgrade-790037: {Iface:virbr1 ExpiryTime:2024-10-09 21:05:59 +0000 UTC Type:0 Mac:52:54:00:84:52:fc Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:kubernetes-upgrade-790037 Clientid:01:52:54:00:84:52:fc}
	I1009 20:07:28.451539   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | domain kubernetes-upgrade-790037 has defined IP address 192.168.39.62 and MAC address 52:54:00:84:52:fc in network mk-kubernetes-upgrade-790037
	I1009 20:07:28.451664   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHPort
	I1009 20:07:28.451826   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHKeyPath
	I1009 20:07:28.451961   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .GetSSHUsername
	I1009 20:07:28.452108   59620 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/kubernetes-upgrade-790037/id_rsa Username:docker}
	I1009 20:07:28.618340   59620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:07:28.644310   59620 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:07:28.644391   59620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:07:28.668903   59620 api_server.go:72] duration metric: took 277.928663ms to wait for apiserver process to appear ...
	I1009 20:07:28.668989   59620 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:07:28.669025   59620 api_server.go:253] Checking apiserver healthz at https://192.168.39.62:8443/healthz ...
	I1009 20:07:28.679548   59620 api_server.go:279] https://192.168.39.62:8443/healthz returned 200:
	ok
	I1009 20:07:28.680770   59620 api_server.go:141] control plane version: v1.31.1
	I1009 20:07:28.680790   59620 api_server.go:131] duration metric: took 11.781621ms to wait for apiserver health ...
	I1009 20:07:28.680799   59620 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:07:28.688516   59620 system_pods.go:59] 8 kube-system pods found
	I1009 20:07:28.688548   59620 system_pods.go:61] "coredns-7c65d6cfc9-6sps9" [87dbc18e-d933-4b82-bb5c-9cd78dade3b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:07:28.688557   59620 system_pods.go:61] "coredns-7c65d6cfc9-pcrj4" [ee0913e4-0483-4723-8898-6c2a300ec01a] Running
	I1009 20:07:28.688569   59620 system_pods.go:61] "etcd-kubernetes-upgrade-790037" [a56cd1f1-85dc-4caa-8fa9-989c1c027fff] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:07:28.688583   59620 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-790037" [6725cb8b-482c-4ccd-87a3-7124d374df1a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:07:28.688598   59620 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-790037" [184ce60b-c1a8-4f7d-85ce-42a05f443eb8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:07:28.688608   59620 system_pods.go:61] "kube-proxy-kxhnx" [607f915a-f088-42a6-8b02-1ab3ba9e041e] Running
	I1009 20:07:28.688620   59620 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-790037" [3975c52d-001f-4f18-9dae-f0498fdfc842] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:07:28.688628   59620 system_pods.go:61] "storage-provisioner" [704a9e35-9fc4-4195-b97b-871316905aa9] Running
	I1009 20:07:28.688636   59620 system_pods.go:74] duration metric: took 7.829663ms to wait for pod list to return data ...
	I1009 20:07:28.688650   59620 kubeadm.go:582] duration metric: took 297.679827ms to wait for: map[apiserver:true system_pods:true]
	I1009 20:07:28.688666   59620 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:07:28.690829   59620 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:07:28.690850   59620 node_conditions.go:123] node cpu capacity is 2
	I1009 20:07:28.690860   59620 node_conditions.go:105] duration metric: took 2.18922ms to run NodePressure ...
	I1009 20:07:28.690873   59620 start.go:241] waiting for startup goroutines ...
	I1009 20:07:28.719320   59620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:07:28.734049   59620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:07:29.400396   59620 main.go:141] libmachine: Making call to close driver server
	I1009 20:07:29.400428   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Close
	I1009 20:07:29.400429   59620 main.go:141] libmachine: Making call to close driver server
	I1009 20:07:29.400442   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Close
	I1009 20:07:29.400705   59620 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:07:29.400721   59620 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:07:29.400729   59620 main.go:141] libmachine: Making call to close driver server
	I1009 20:07:29.400737   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Close
	I1009 20:07:29.400870   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Closing plugin on server side
	I1009 20:07:29.400929   59620 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:07:29.400947   59620 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:07:29.400956   59620 main.go:141] libmachine: Making call to close driver server
	I1009 20:07:29.400964   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Close
	I1009 20:07:29.401047   59620 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:07:29.401064   59620 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:07:29.401318   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) DBG | Closing plugin on server side
	I1009 20:07:29.401324   59620 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:07:29.401339   59620 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:07:29.406940   59620 main.go:141] libmachine: Making call to close driver server
	I1009 20:07:29.406959   59620 main.go:141] libmachine: (kubernetes-upgrade-790037) Calling .Close
	I1009 20:07:29.407246   59620 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:07:29.407262   59620 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:07:29.409223   59620 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 20:07:29.410448   59620 addons.go:510] duration metric: took 1.019422449s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1009 20:07:29.410493   59620 start.go:246] waiting for cluster config update ...
	I1009 20:07:29.410506   59620 start.go:255] writing updated cluster config ...
	I1009 20:07:29.410696   59620 ssh_runner.go:195] Run: rm -f paused
	I1009 20:07:29.459828   59620 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:07:29.461563   59620 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-790037" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.160459161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504450160438092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8807ffde-7b53-4302-b026-31938502fed9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.160960203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d53c5b34-0890-4884-b158-c641a1778f80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.161030619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d53c5b34-0890-4884-b158-c641a1778f80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.161444706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4da2410dca21c5da3e3b0749a97d881c8a1944214f4d7911c9559069bed9308,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728504446754493752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f126ddc16ebf7d40b04aea10d8acaacd70adf32276b47281dacba650103841,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504446745527728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98beb299e399b6794203c4e5f4ea0244df7594eaf1a714fe018613f683cfe1ae,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504442943194489,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71dd21a6af1eaa8506c77d28db00e31928e9b6cb42e8d73f1752a033d63ae6be,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504442927699925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dba2d2112a6f58c82c092e51c362b3a7ed3dcd8357a5a5e990c52f205be21ac,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504442937768670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7175f8d685198009b7c2b24b3b0e9637302af36f0cb87a84da7ead8d79254e6,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504442903549851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99529194afa147ddbae8c80ab82d407acddde4795e70120e0e0dea507ae05ff,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504440314986327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-47
23-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0016f0adefdd8defee57e1be9f36f7817c1b2352aeb67223e7503c6724725f,PodSandboxId:69e96fb40841db2bac5ff1e6f5289065a0cbe5029d4bd99680652d57d037a4a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17285044182511
98752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035f7c2e523929ae5205cf27d81ab69e2fda91711e0c355733c0e7442203a0be,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504419010202653,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-4723-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504418847274347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25
ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504418134249347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728504418013331652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504418207372189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504417981628302,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504417753148572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3660c79466a817658d79f78e4a83b4bb821be87e9f5ee4b532ec97e991c719c4,PodSandboxId:75cdf9a6351aaa0f6146fa2a2f70f88e2fdb5e1d8aaf060c7d4a143e37a34f8a,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504388540392655,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d53c5b34-0890-4884-b158-c641a1778f80 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.205316855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d80ea9a2-f6ff-4978-bbe5-8a918822aa03 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.205415064Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d80ea9a2-f6ff-4978-bbe5-8a918822aa03 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.206569597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3c3ad6a4-0ebc-4461-851c-a7507c838dc4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.206974519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504450206953387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c3ad6a4-0ebc-4461-851c-a7507c838dc4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.207901678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52de4e3c-d50b-479e-b08d-8288db3bdaf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.207968554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52de4e3c-d50b-479e-b08d-8288db3bdaf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.208317833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4da2410dca21c5da3e3b0749a97d881c8a1944214f4d7911c9559069bed9308,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728504446754493752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f126ddc16ebf7d40b04aea10d8acaacd70adf32276b47281dacba650103841,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504446745527728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98beb299e399b6794203c4e5f4ea0244df7594eaf1a714fe018613f683cfe1ae,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504442943194489,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71dd21a6af1eaa8506c77d28db00e31928e9b6cb42e8d73f1752a033d63ae6be,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504442927699925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dba2d2112a6f58c82c092e51c362b3a7ed3dcd8357a5a5e990c52f205be21ac,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504442937768670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7175f8d685198009b7c2b24b3b0e9637302af36f0cb87a84da7ead8d79254e6,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504442903549851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99529194afa147ddbae8c80ab82d407acddde4795e70120e0e0dea507ae05ff,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504440314986327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-47
23-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0016f0adefdd8defee57e1be9f36f7817c1b2352aeb67223e7503c6724725f,PodSandboxId:69e96fb40841db2bac5ff1e6f5289065a0cbe5029d4bd99680652d57d037a4a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17285044182511
98752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035f7c2e523929ae5205cf27d81ab69e2fda91711e0c355733c0e7442203a0be,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504419010202653,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-4723-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504418847274347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25
ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504418134249347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728504418013331652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504418207372189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504417981628302,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504417753148572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3660c79466a817658d79f78e4a83b4bb821be87e9f5ee4b532ec97e991c719c4,PodSandboxId:75cdf9a6351aaa0f6146fa2a2f70f88e2fdb5e1d8aaf060c7d4a143e37a34f8a,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504388540392655,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52de4e3c-d50b-479e-b08d-8288db3bdaf4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.253036245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c531106-1aaa-471d-a233-a971954857f5 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.253125608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c531106-1aaa-471d-a233-a971954857f5 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.254292559Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb2806e4-8425-4a31-8034-11dbda93aa30 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.254932244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504450254906669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb2806e4-8425-4a31-8034-11dbda93aa30 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.256274873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9eecd789-61ff-4681-be7a-10162c25d28d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.256331126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9eecd789-61ff-4681-be7a-10162c25d28d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.256721928Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4da2410dca21c5da3e3b0749a97d881c8a1944214f4d7911c9559069bed9308,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728504446754493752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f126ddc16ebf7d40b04aea10d8acaacd70adf32276b47281dacba650103841,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504446745527728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98beb299e399b6794203c4e5f4ea0244df7594eaf1a714fe018613f683cfe1ae,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504442943194489,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71dd21a6af1eaa8506c77d28db00e31928e9b6cb42e8d73f1752a033d63ae6be,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504442927699925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dba2d2112a6f58c82c092e51c362b3a7ed3dcd8357a5a5e990c52f205be21ac,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504442937768670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7175f8d685198009b7c2b24b3b0e9637302af36f0cb87a84da7ead8d79254e6,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504442903549851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99529194afa147ddbae8c80ab82d407acddde4795e70120e0e0dea507ae05ff,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504440314986327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-47
23-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0016f0adefdd8defee57e1be9f36f7817c1b2352aeb67223e7503c6724725f,PodSandboxId:69e96fb40841db2bac5ff1e6f5289065a0cbe5029d4bd99680652d57d037a4a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17285044182511
98752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035f7c2e523929ae5205cf27d81ab69e2fda91711e0c355733c0e7442203a0be,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504419010202653,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-4723-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504418847274347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25
ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504418134249347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728504418013331652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504418207372189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504417981628302,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504417753148572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3660c79466a817658d79f78e4a83b4bb821be87e9f5ee4b532ec97e991c719c4,PodSandboxId:75cdf9a6351aaa0f6146fa2a2f70f88e2fdb5e1d8aaf060c7d4a143e37a34f8a,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504388540392655,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9eecd789-61ff-4681-be7a-10162c25d28d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.293733791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=653fbda0-33ca-46a8-b3eb-274ee903d284 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.293871998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=653fbda0-33ca-46a8-b3eb-274ee903d284 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.296104161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec9d867a-f760-42f2-9916-aeb55487f160 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.296491323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504450296459476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec9d867a-f760-42f2-9916-aeb55487f160 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.297324932Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd312b0f-f3c3-4850-9438-a2a5a4213206 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.297405419Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd312b0f-f3c3-4850-9438-a2a5a4213206 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:07:30 kubernetes-upgrade-790037 crio[2261]: time="2024-10-09 20:07:30.297790610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a4da2410dca21c5da3e3b0749a97d881c8a1944214f4d7911c9559069bed9308,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728504446754493752,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2f126ddc16ebf7d40b04aea10d8acaacd70adf32276b47281dacba650103841,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504446745527728,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protoco
l\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:98beb299e399b6794203c4e5f4ea0244df7594eaf1a714fe018613f683cfe1ae,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504442943194489,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71dd21a6af1eaa8506c77d28db00e31928e9b6cb42e8d73f1752a033d63ae6be,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504442927699925,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9dba2d2112a6f58c82c092e51c362b3a7ed3dcd8357a5a5e990c52f205be21ac,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504442937768670,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7175f8d685198009b7c2b24b3b0e9637302af36f0cb87a84da7ead8d79254e6,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504442903549851,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b99529194afa147ddbae8c80ab82d407acddde4795e70120e0e0dea507ae05ff,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504440314986327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-47
23-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b0016f0adefdd8defee57e1be9f36f7817c1b2352aeb67223e7503c6724725f,PodSandboxId:69e96fb40841db2bac5ff1e6f5289065a0cbe5029d4bd99680652d57d037a4a3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17285044182511
98752,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035f7c2e523929ae5205cf27d81ab69e2fda91711e0c355733c0e7442203a0be,PodSandboxId:181dd1f33cc2836a085caf915b78965137211518c0e6c718c28e31a5ac1c5472,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504419010202653,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-pcrj4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee0913e4-0483-4723-8898-6c2a300ec01a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e,PodSandboxId:ccfcd348853c18056695ae6a023966d0957178af05cb942fd2950cf0dda72b2e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504418847274347,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6sps9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87dbc18e-d933-4b82-bb5c-9cd78dade3b4,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa,PodSandboxId:9459a8064b2bf7aed82729c25fdac75cd4c0abc748cc25
ce7798a27ce2df87e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504418134249347,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52a42c13f818c813fec5e784d3c3d17a,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf,PodSandboxId:9bb14a701316337510af845288d8447a9b00edfa3e89f29fa94f92f0ab4b136a,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728504418013331652,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 704a9e35-9fc4-4195-b97b-871316905aa9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87,PodSandboxId:34fac7dce5156014b5d0b0b60ba404cb7fe9ae1fe7b5cdc849cbd5c30c0c1420,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504418207372189,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d11c1330580c8c7bb9c80f865a41e57,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4,PodSandboxId:e69d64093acb58660cb22939a0abfbf88d8906241246e3fd12c28067606504ac,Metadata:&ContainerMetadata{Name:kub
e-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504417981628302,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03c5f2f4cc36e04236f2e680bb8b3a51,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760,PodSandboxId:1ee80e5c82a4e4e3616031049199fe883c6bc4bc5948158bb917b99e5557fb9b,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504417753148572,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-790037,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98489cf1622c458339ec4d1f6918db59,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3660c79466a817658d79f78e4a83b4bb821be87e9f5ee4b532ec97e991c719c4,PodSandboxId:75cdf9a6351aaa0f6146fa2a2f70f88e2fdb5e1d8aaf060c7d4a143e37a34f8a,Metadata:&ContainerMe
tadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504388540392655,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-kxhnx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 607f915a-f088-42a6-8b02-1ab3ba9e041e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd312b0f-f3c3-4850-9438-a2a5a4213206 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a4da2410dca21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       2                   9bb14a7013163       storage-provisioner
	f2f126ddc16eb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago        Running             coredns                   2                   ccfcd348853c1       coredns-7c65d6cfc9-6sps9
	98beb299e399b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago        Running             kube-scheduler            2                   34fac7dce5156       kube-scheduler-kubernetes-upgrade-790037
	9dba2d2112a6f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago        Running             kube-controller-manager   2                   e69d64093acb5       kube-controller-manager-kubernetes-upgrade-790037
	71dd21a6af1ea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago        Running             kube-apiserver            2                   1ee80e5c82a4e       kube-apiserver-kubernetes-upgrade-790037
	a7175f8d68519       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago        Running             etcd                      2                   9459a8064b2bf       etcd-kubernetes-upgrade-790037
	b99529194afa1       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   10 seconds ago       Running             coredns                   2                   181dd1f33cc28       coredns-7c65d6cfc9-pcrj4
	035f7c2e52392       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   31 seconds ago       Exited              coredns                   1                   181dd1f33cc28       coredns-7c65d6cfc9-pcrj4
	f6751af980fec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   31 seconds ago       Exited              coredns                   1                   ccfcd348853c1       coredns-7c65d6cfc9-6sps9
	9b0016f0adefd       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   32 seconds ago       Running             kube-proxy                1                   69e96fb40841d       kube-proxy-kxhnx
	a5971ccadba8b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   32 seconds ago       Exited              kube-scheduler            1                   34fac7dce5156       kube-scheduler-kubernetes-upgrade-790037
	9f9307582c12b       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   32 seconds ago       Exited              etcd                      1                   9459a8064b2bf       etcd-kubernetes-upgrade-790037
	f20b1387babd6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   32 seconds ago       Exited              storage-provisioner       1                   9bb14a7013163       storage-provisioner
	ee5e711ee7f84       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   32 seconds ago       Exited              kube-controller-manager   1                   e69d64093acb5       kube-controller-manager-kubernetes-upgrade-790037
	fad7c8cf4f9ac       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   32 seconds ago       Exited              kube-apiserver            1                   1ee80e5c82a4e       kube-apiserver-kubernetes-upgrade-790037
	3660c79466a81       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   About a minute ago   Exited              kube-proxy                0                   75cdf9a6351aa       kube-proxy-kxhnx
	
	
	==> coredns [035f7c2e523929ae5205cf27d81ab69e2fda91711e0c355733c0e7442203a0be] <==
	
	
	==> coredns [b99529194afa147ddbae8c80ab82d407acddde4795e70120e0e0dea507ae05ff] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	
	
	==> coredns [f2f126ddc16ebf7d40b04aea10d8acaacd70adf32276b47281dacba650103841] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-790037
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-790037
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:06:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-790037
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:07:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:07:26 +0000   Wed, 09 Oct 2024 20:06:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:07:26 +0000   Wed, 09 Oct 2024 20:06:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:07:26 +0000   Wed, 09 Oct 2024 20:06:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:07:26 +0000   Wed, 09 Oct 2024 20:06:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.62
	  Hostname:    kubernetes-upgrade-790037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8db44fa1ca94217b130a3a5f9c54b72
	  System UUID:                c8db44fa-1ca9-4217-b130-a3a5f9c54b72
	  Boot ID:                    9ae638d7-61a9-4b4e-b527-98a5c83a9596
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6sps9                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 coredns-7c65d6cfc9-pcrj4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     63s
	  kube-system                 etcd-kubernetes-upgrade-790037                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         65s
	  kube-system                 kube-apiserver-kubernetes-upgrade-790037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-790037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-proxy-kxhnx                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-kubernetes-upgrade-790037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 61s                kube-proxy       
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node kubernetes-upgrade-790037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node kubernetes-upgrade-790037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node kubernetes-upgrade-790037 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           63s                node-controller  Node kubernetes-upgrade-790037 event: Registered Node kubernetes-upgrade-790037 in Controller
	  Normal  RegisteredNode           25s                node-controller  Node kubernetes-upgrade-790037 event: Registered Node kubernetes-upgrade-790037 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-790037 event: Registered Node kubernetes-upgrade-790037 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.042128] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +0.064049] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068257] systemd-fstab-generator[567]: Ignoring "noauto" option for root device
	[  +0.205413] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.158423] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.306825] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +4.276972] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +0.061312] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.224675] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +9.576542] systemd-fstab-generator[1232]: Ignoring "noauto" option for root device
	[  +0.090098] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.828991] kauditd_printk_skb: 109 callbacks suppressed
	[ +17.833329] systemd-fstab-generator[2187]: Ignoring "noauto" option for root device
	[  +0.148830] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.199225] systemd-fstab-generator[2214]: Ignoring "noauto" option for root device
	[  +0.166145] systemd-fstab-generator[2226]: Ignoring "noauto" option for root device
	[  +0.296636] systemd-fstab-generator[2254]: Ignoring "noauto" option for root device
	[  +7.179644] systemd-fstab-generator[2399]: Ignoring "noauto" option for root device
	[  +0.083426] kauditd_printk_skb: 100 callbacks suppressed
	[Oct 9 20:07] kauditd_printk_skb: 121 callbacks suppressed
	[ +16.812090] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.084922] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.159094] systemd-fstab-generator[4010]: Ignoring "noauto" option for root device
	[  +0.124680] kauditd_printk_skb: 43 callbacks suppressed
	
	
	==> etcd [9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa] <==
	{"level":"info","ts":"2024-10-09T20:07:00.264108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:07:00.264151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgPreVoteResp from 4cff10f3f970b356 at term 2"}
	{"level":"info","ts":"2024-10-09T20:07:00.264173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became candidate at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:00.264199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgVoteResp from 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:00.264232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became leader at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:00.264257Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4cff10f3f970b356 elected leader 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:00.268204Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4cff10f3f970b356","local-member-attributes":"{Name:kubernetes-upgrade-790037 ClientURLs:[https://192.168.39.62:2379]}","request-path":"/0/members/4cff10f3f970b356/attributes","cluster-id":"cebe0b560c7f0a8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:07:00.268371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:07:00.269432Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:07:00.272406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.62:2379"}
	{"level":"info","ts":"2024-10-09T20:07:00.272804Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:07:00.273757Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:07:00.276575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:07:00.279069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:07:00.279115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:07:09.734521Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-09T20:07:09.734571Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-790037","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.62:2380"],"advertise-client-urls":["https://192.168.39.62:2379"]}
	{"level":"warn","ts":"2024-10-09T20:07:09.734648Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:07:09.734734Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:07:09.764492Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.62:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:07:09.764556Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.62:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-09T20:07:09.764599Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"4cff10f3f970b356","current-leader-member-id":"4cff10f3f970b356"}
	{"level":"info","ts":"2024-10-09T20:07:09.767695Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2024-10-09T20:07:09.767781Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2024-10-09T20:07:09.767877Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-790037","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.62:2380"],"advertise-client-urls":["https://192.168.39.62:2379"]}
	
	
	==> etcd [a7175f8d685198009b7c2b24b3b0e9637302af36f0cb87a84da7ead8d79254e6] <==
	{"level":"info","ts":"2024-10-09T20:07:23.230961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 switched to configuration voters=(5548171905991750486)"}
	{"level":"info","ts":"2024-10-09T20:07:23.231052Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cebe0b560c7f0a8","local-member-id":"4cff10f3f970b356","added-peer-id":"4cff10f3f970b356","added-peer-peer-urls":["https://192.168.39.62:2380"]}
	{"level":"info","ts":"2024-10-09T20:07:23.231143Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cebe0b560c7f0a8","local-member-id":"4cff10f3f970b356","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:07:23.231169Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:07:23.253734Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T20:07:23.254034Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"4cff10f3f970b356","initial-advertise-peer-urls":["https://192.168.39.62:2380"],"listen-peer-urls":["https://192.168.39.62:2380"],"advertise-client-urls":["https://192.168.39.62:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.62:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T20:07:23.254079Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T20:07:23.254171Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2024-10-09T20:07:23.254203Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.62:2380"}
	{"level":"info","ts":"2024-10-09T20:07:25.059322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:25.059431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:25.059467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgPreVoteResp from 4cff10f3f970b356 at term 3"}
	{"level":"info","ts":"2024-10-09T20:07:25.059505Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became candidate at term 4"}
	{"level":"info","ts":"2024-10-09T20:07:25.059529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 received MsgVoteResp from 4cff10f3f970b356 at term 4"}
	{"level":"info","ts":"2024-10-09T20:07:25.059555Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4cff10f3f970b356 became leader at term 4"}
	{"level":"info","ts":"2024-10-09T20:07:25.059592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4cff10f3f970b356 elected leader 4cff10f3f970b356 at term 4"}
	{"level":"info","ts":"2024-10-09T20:07:25.062248Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"4cff10f3f970b356","local-member-attributes":"{Name:kubernetes-upgrade-790037 ClientURLs:[https://192.168.39.62:2379]}","request-path":"/0/members/4cff10f3f970b356/attributes","cluster-id":"cebe0b560c7f0a8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:07:25.062484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:07:25.062635Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:07:25.063606Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:07:25.063732Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:07:25.063765Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:07:25.064446Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:07:25.064663Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:07:25.065317Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.62:2379"}
	
	
	==> kernel <==
	 20:07:30 up 1 min,  0 users,  load average: 0.60, 0.22, 0.08
	Linux kubernetes-upgrade-790037 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [71dd21a6af1eaa8506c77d28db00e31928e9b6cb42e8d73f1752a033d63ae6be] <==
	I1009 20:07:26.355122       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:07:26.355329       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:07:26.355380       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:07:26.384018       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1009 20:07:26.384179       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:07:26.384336       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:07:26.384425       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:07:26.384450       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:07:26.384482       1 shared_informer.go:320] Caches are synced for configmaps
	I1009 20:07:26.384495       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:07:26.390946       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E1009 20:07:26.398636       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:07:26.445820       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1009 20:07:26.455930       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:07:26.466032       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1009 20:07:26.466102       1 policy_source.go:224] refreshing policies
	I1009 20:07:26.466468       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 20:07:27.283793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:07:28.149125       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 20:07:28.162623       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 20:07:28.202633       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 20:07:28.324728       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:07:28.331430       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:07:29.264690       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 20:07:29.736791       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760] <==
	W1009 20:07:19.251223       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.255605       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.257108       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.321553       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.332018       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.333402       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.333513       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.341958       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.369428       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.380140       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.402666       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.427933       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.437886       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.500939       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.553643       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.574352       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.611299       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.630443       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.636030       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.643547       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.672707       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.681390       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.728111       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.823083       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:07:19.874202       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [9dba2d2112a6f58c82c092e51c362b3a7ed3dcd8357a5a5e990c52f205be21ac] <==
	I1009 20:07:29.717560       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1009 20:07:29.722584       1 shared_informer.go:320] Caches are synced for endpoint
	I1009 20:07:29.722656       1 shared_informer.go:320] Caches are synced for taint
	I1009 20:07:29.722741       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:07:29.722912       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-790037"
	I1009 20:07:29.722962       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 20:07:29.729597       1 shared_informer.go:320] Caches are synced for attach detach
	I1009 20:07:29.729666       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1009 20:07:29.729713       1 shared_informer.go:320] Caches are synced for TTL
	I1009 20:07:29.729878       1 shared_informer.go:320] Caches are synced for GC
	I1009 20:07:29.729957       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1009 20:07:29.729603       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1009 20:07:29.732543       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1009 20:07:29.732927       1 shared_informer.go:320] Caches are synced for ephemeral
	I1009 20:07:29.736995       1 shared_informer.go:320] Caches are synced for job
	I1009 20:07:29.742312       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1009 20:07:29.746678       1 shared_informer.go:320] Caches are synced for HPA
	I1009 20:07:29.935876       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:07:29.939496       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:07:29.951497       1 shared_informer.go:320] Caches are synced for disruption
	I1009 20:07:30.144453       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="411.816065ms"
	I1009 20:07:30.144548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="60.586µs"
	I1009 20:07:30.382485       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:07:30.429914       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:07:30.429938       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4] <==
	I1009 20:07:05.162799       1 shared_informer.go:320] Caches are synced for GC
	I1009 20:07:05.162897       1 shared_informer.go:320] Caches are synced for service account
	I1009 20:07:05.162949       1 shared_informer.go:320] Caches are synced for cronjob
	I1009 20:07:05.162973       1 shared_informer.go:320] Caches are synced for crt configmap
	I1009 20:07:05.162955       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1009 20:07:05.162964       1 shared_informer.go:320] Caches are synced for disruption
	I1009 20:07:05.162902       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1009 20:07:05.166887       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1009 20:07:05.169049       1 shared_informer.go:320] Caches are synced for PV protection
	I1009 20:07:05.173812       1 shared_informer.go:320] Caches are synced for endpoint
	I1009 20:07:05.211515       1 shared_informer.go:320] Caches are synced for job
	I1009 20:07:05.253036       1 shared_informer.go:320] Caches are synced for PVC protection
	I1009 20:07:05.260819       1 shared_informer.go:320] Caches are synced for attach detach
	I1009 20:07:05.270425       1 shared_informer.go:320] Caches are synced for ephemeral
	I1009 20:07:05.311969       1 shared_informer.go:320] Caches are synced for persistent volume
	I1009 20:07:05.312196       1 shared_informer.go:320] Caches are synced for stateful set
	I1009 20:07:05.312231       1 shared_informer.go:320] Caches are synced for expand
	I1009 20:07:05.341071       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:07:05.369122       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:07:05.426949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="264.708309ms"
	I1009 20:07:05.427245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="185.353µs"
	I1009 20:07:05.790065       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:07:05.790191       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:07:05.809608       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:07:09.685975       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.135µs"
	
	
	==> kube-proxy [3660c79466a817658d79f78e4a83b4bb821be87e9f5ee4b532ec97e991c719c4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:06:28.757315       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:06:28.772140       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	E1009 20:06:28.791982       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:06:28.849223       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:06:28.849305       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:06:28.849342       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:06:28.852005       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:06:28.852264       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:06:28.852291       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:06:28.854124       1 config.go:328] "Starting node config controller"
	I1009 20:06:28.854148       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:06:28.854657       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:06:28.854684       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:06:28.854716       1 config.go:199] "Starting service config controller"
	I1009 20:06:28.854722       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:06:28.954293       1 shared_informer.go:320] Caches are synced for node config
	I1009 20:06:28.955463       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:06:28.955520       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [9b0016f0adefdd8defee57e1be9f36f7817c1b2352aeb67223e7503c6724725f] <==
	E1009 20:07:00.831819       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:07:01.947276       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.62"]
	E1009 20:07:01.967548       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:07:02.080307       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:07:02.080459       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:07:02.080582       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:07:02.085300       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:07:02.085616       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:07:02.085656       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:07:02.087239       1 config.go:199] "Starting service config controller"
	I1009 20:07:02.087304       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:07:02.087334       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:07:02.087338       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:07:02.088029       1 config.go:328] "Starting node config controller"
	I1009 20:07:02.088065       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:07:02.188274       1 shared_informer.go:320] Caches are synced for node config
	I1009 20:07:02.188321       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:07:02.188336       1 shared_informer.go:320] Caches are synced for endpoint slice config
	E1009 20:07:26.374989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E1009 20:07:26.375112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)" logger="UnhandledError"
	E1009 20:07:26.375232       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	
	
	==> kube-scheduler [98beb299e399b6794203c4e5f4ea0244df7594eaf1a714fe018613f683cfe1ae] <==
	I1009 20:07:24.009327       1 serving.go:386] Generated self-signed cert in-memory
	I1009 20:07:26.393597       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1009 20:07:26.393635       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:07:26.401419       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I1009 20:07:26.401459       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1009 20:07:26.401517       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:07:26.401529       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:07:26.401559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1009 20:07:26.401571       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 20:07:26.402096       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 20:07:26.402245       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:07:26.501642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1009 20:07:26.501892       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I1009 20:07:26.501954       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87] <==
	E1009 20:07:01.920286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.920383       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:07:01.920415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.920580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 20:07:01.920624       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.920713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:07:01.921523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.921790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:07:01.921892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.922120       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:07:01.923793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.924035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:07:01.925160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.924052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 20:07:01.925184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.924568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:07:01.925200       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:07:01.924615       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "system:basic-user" not found
	E1009 20:07:01.925218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found" logger="UnhandledError"
	W1009 20:07:01.924732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	E1009 20:07:01.925244       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError"
	W1009 20:07:01.924769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E1009 20:07:01.925283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError"
	I1009 20:07:02.008464       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 20:07:09.585578       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.634817    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/52a42c13f818c813fec5e784d3c3d17a-etcd-certs\") pod \"etcd-kubernetes-upgrade-790037\" (UID: \"52a42c13f818c813fec5e784d3c3d17a\") " pod="kube-system/etcd-kubernetes-upgrade-790037"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.634873    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/03c5f2f4cc36e04236f2e680bb8b3a51-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-790037\" (UID: \"03c5f2f4cc36e04236f2e680bb8b3a51\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-790037"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.634899    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03c5f2f4cc36e04236f2e680bb8b3a51-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-790037\" (UID: \"03c5f2f4cc36e04236f2e680bb8b3a51\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-790037"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.813726    3677 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-790037"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: E1009 20:07:22.814566    3677 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.62:8443: connect: connection refused" node="kubernetes-upgrade-790037"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.886512    3677 scope.go:117] "RemoveContainer" containerID="9f9307582c12bfbcca527bd1a2e76a7529fabb8c8155e24753a750359e69e6aa"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.887741    3677 scope.go:117] "RemoveContainer" containerID="ee5e711ee7f8410288aef4a0c74aa87a8a7b3c1ef5a1f8a65318181437da1ff4"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.887931    3677 scope.go:117] "RemoveContainer" containerID="fad7c8cf4f9ac7b540dc21ff970bfc394d4c05a4d972adc8a1633d7c61c94760"
	Oct 09 20:07:22 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:22.889151    3677 scope.go:117] "RemoveContainer" containerID="a5971ccadba8b548a1ff0b6dae1a7758dbffd05e5cd70c260a03d7ce40c6ce87"
	Oct 09 20:07:23 kubernetes-upgrade-790037 kubelet[3677]: E1009 20:07:23.030111    3677 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-790037?timeout=10s\": dial tcp 192.168.39.62:8443: connect: connection refused" interval="800ms"
	Oct 09 20:07:23 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:23.216492    3677 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-790037"
	Oct 09 20:07:23 kubernetes-upgrade-790037 kubelet[3677]: E1009 20:07:23.217511    3677 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.62:8443: connect: connection refused" node="kubernetes-upgrade-790037"
	Oct 09 20:07:24 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:24.019616    3677 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-790037"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.416790    3677 apiserver.go:52] "Watching apiserver"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.438140    3677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.446006    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/607f915a-f088-42a6-8b02-1ab3ba9e041e-xtables-lock\") pod \"kube-proxy-kxhnx\" (UID: \"607f915a-f088-42a6-8b02-1ab3ba9e041e\") " pod="kube-system/kube-proxy-kxhnx"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.446574    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/704a9e35-9fc4-4195-b97b-871316905aa9-tmp\") pod \"storage-provisioner\" (UID: \"704a9e35-9fc4-4195-b97b-871316905aa9\") " pod="kube-system/storage-provisioner"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.446651    3677 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/607f915a-f088-42a6-8b02-1ab3ba9e041e-lib-modules\") pod \"kube-proxy-kxhnx\" (UID: \"607f915a-f088-42a6-8b02-1ab3ba9e041e\") " pod="kube-system/kube-proxy-kxhnx"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.537632    3677 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-790037"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.537804    3677 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-790037"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.537935    3677 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.542332    3677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: E1009 20:07:26.589786    3677 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-790037\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-790037"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.730251    3677 scope.go:117] "RemoveContainer" containerID="f6751af980fecda6a6f0e33b0f662360609d5dc762a1cb0a87943c1b1c217f5e"
	Oct 09 20:07:26 kubernetes-upgrade-790037 kubelet[3677]: I1009 20:07:26.736981    3677 scope.go:117] "RemoveContainer" containerID="f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf"
	
	
	==> storage-provisioner [a4da2410dca21c5da3e3b0749a97d881c8a1944214f4d7911c9559069bed9308] <==
	I1009 20:07:26.840145       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:07:26.851301       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:07:26.851351       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [f20b1387babd69221ac617c35246cd3d1279f26ed5c5d4a1cc9a5576c9ba62cf] <==
	I1009 20:06:59.391095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:07:01.963054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:07:01.963135       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:07:01.994662       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:07:01.995107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-790037_2a1d3a6a-4b03-4bf7-95e9-79368a63930d!
	I1009 20:07:01.997781       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"801e7309-c8b2-41e6-bf20-b9eab604a343", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-790037_2a1d3a6a-4b03-4bf7-95e9-79368a63930d became leader
	I1009 20:07:02.097704       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-790037_2a1d3a6a-4b03-4bf7-95e9-79368a63930d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-790037 -n kubernetes-upgrade-790037
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-790037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-790037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-790037
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-790037: (1.106051307s)
--- FAIL: TestKubernetesUpgrade (394.31s)

x
+
TestPause/serial/SecondStartNoReconfiguration (61.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-739381 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-739381 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (54.671334925s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-739381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-739381" primary control-plane node in "pause-739381" cluster
	* Updating the running kvm2 "pause-739381" VM ...
	* Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-739381" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1009 20:04:00.095709   55086 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:04:00.095836   55086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:00.095845   55086 out.go:358] Setting ErrFile to fd 2...
	I1009 20:04:00.095849   55086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:00.096547   55086 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:04:00.097403   55086 out.go:352] Setting JSON to false
	I1009 20:04:00.098836   55086 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6381,"bootTime":1728497859,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:04:00.098931   55086 start.go:139] virtualization: kvm guest
	I1009 20:04:00.101160   55086 out.go:177] * [pause-739381] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:04:00.102781   55086 notify.go:220] Checking for updates...
	I1009 20:04:00.102788   55086 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:04:00.104417   55086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:04:00.105787   55086 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:04:00.107315   55086 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:00.108702   55086 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:04:00.110206   55086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:04:00.112068   55086 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:00.112649   55086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:04:00.112709   55086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:04:00.128697   55086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38519
	I1009 20:04:00.129160   55086 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:04:00.129848   55086 main.go:141] libmachine: Using API Version  1
	I1009 20:04:00.129873   55086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:04:00.130211   55086 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:04:00.130411   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:00.130622   55086 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:04:00.130890   55086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:04:00.130926   55086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:04:00.146256   55086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
	I1009 20:04:00.146783   55086 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:04:00.147555   55086 main.go:141] libmachine: Using API Version  1
	I1009 20:04:00.147591   55086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:04:00.147924   55086 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:04:00.148106   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:00.184476   55086 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:04:00.185996   55086 start.go:297] selected driver: kvm2
	I1009 20:04:00.186010   55086 start.go:901] validating driver "kvm2" against &{Name:pause-739381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:pause-739381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:00.186140   55086 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:04:00.186451   55086 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:00.186543   55086 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:04:00.201719   55086 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:04:00.202798   55086 cni.go:84] Creating CNI manager for ""
	I1009 20:04:00.202866   55086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:00.202949   55086 start.go:340] cluster config:
	{Name:pause-739381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-739381 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-ali
ases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:00.203163   55086 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:00.205275   55086 out.go:177] * Starting "pause-739381" primary control-plane node in "pause-739381" cluster
	I1009 20:04:00.206551   55086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:00.206595   55086 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:04:00.206607   55086 cache.go:56] Caching tarball of preloaded images
	I1009 20:04:00.206684   55086 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:04:00.206699   55086 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:04:00.206828   55086 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/config.json ...
	I1009 20:04:00.207049   55086 start.go:360] acquireMachinesLock for pause-739381: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:04:13.791744   55086 start.go:364] duration metric: took 13.584633687s to acquireMachinesLock for "pause-739381"
	I1009 20:04:13.791792   55086 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:04:13.791800   55086 fix.go:54] fixHost starting: 
	I1009 20:04:13.792217   55086 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:04:13.792280   55086 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:04:13.811284   55086 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36969
	I1009 20:04:13.811687   55086 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:04:13.812169   55086 main.go:141] libmachine: Using API Version  1
	I1009 20:04:13.812192   55086 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:04:13.812475   55086 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:04:13.812648   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:13.812765   55086 main.go:141] libmachine: (pause-739381) Calling .GetState
	I1009 20:04:13.814490   55086 fix.go:112] recreateIfNeeded on pause-739381: state=Running err=<nil>
	W1009 20:04:13.814505   55086 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:04:13.816666   55086 out.go:177] * Updating the running kvm2 "pause-739381" VM ...
	I1009 20:04:13.817978   55086 machine.go:93] provisionDockerMachine start ...
	I1009 20:04:13.817997   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:13.818159   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:13.820307   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:13.820674   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:13.820698   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:13.820905   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:13.821084   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:13.821232   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:13.821403   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:13.822497   55086 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:13.822747   55086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.224 22 <nil> <nil>}
	I1009 20:04:13.822770   55086 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:04:13.940267   55086 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-739381
	
	I1009 20:04:13.940297   55086 main.go:141] libmachine: (pause-739381) Calling .GetMachineName
	I1009 20:04:13.940541   55086 buildroot.go:166] provisioning hostname "pause-739381"
	I1009 20:04:13.940571   55086 main.go:141] libmachine: (pause-739381) Calling .GetMachineName
	I1009 20:04:13.940741   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:13.943898   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:13.944229   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:13.944264   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:13.944421   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:13.944627   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:13.944778   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:13.944932   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:13.945129   55086 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:13.945286   55086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.224 22 <nil> <nil>}
	I1009 20:04:13.945297   55086 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-739381 && echo "pause-739381" | sudo tee /etc/hostname
	I1009 20:04:14.072895   55086 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-739381
	
	I1009 20:04:14.072921   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:14.075776   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.076163   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:14.076191   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.076290   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:14.076450   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:14.076584   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:14.076747   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:14.076877   55086 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:14.077049   55086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.224 22 <nil> <nil>}
	I1009 20:04:14.077064   55086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-739381' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-739381/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-739381' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:04:14.196073   55086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:04:14.196104   55086 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:04:14.196138   55086 buildroot.go:174] setting up certificates
	I1009 20:04:14.196148   55086 provision.go:84] configureAuth start
	I1009 20:04:14.196161   55086 main.go:141] libmachine: (pause-739381) Calling .GetMachineName
	I1009 20:04:14.196481   55086 main.go:141] libmachine: (pause-739381) Calling .GetIP
	I1009 20:04:14.199675   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.200129   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:14.200160   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.200312   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:14.202988   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.203367   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:14.203393   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.203548   55086 provision.go:143] copyHostCerts
	I1009 20:04:14.203616   55086 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:04:14.203631   55086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:04:14.203697   55086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:04:14.203835   55086 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:04:14.203845   55086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:04:14.203868   55086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:04:14.203933   55086 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:04:14.203941   55086 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:04:14.203958   55086 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:04:14.204019   55086 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.pause-739381 san=[127.0.0.1 192.168.50.224 localhost minikube pause-739381]
	I1009 20:04:14.309378   55086 provision.go:177] copyRemoteCerts
	I1009 20:04:14.309432   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:04:14.309452   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:14.312457   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.312785   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:14.312817   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.313033   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:14.313210   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:14.313363   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:14.313499   55086 sshutil.go:53] new ssh client: &{IP:192.168.50.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/pause-739381/id_rsa Username:docker}
	I1009 20:04:14.412274   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:04:14.442923   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1009 20:04:14.473002   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:04:14.502543   55086 provision.go:87] duration metric: took 306.382029ms to configureAuth
	I1009 20:04:14.502577   55086 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:04:14.502842   55086 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:14.502932   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:14.505828   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.506223   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:14.506267   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:14.506464   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:14.506672   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:14.506860   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:14.507015   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:14.507195   55086 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:14.507401   55086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.224 22 <nil> <nil>}
	I1009 20:04:14.507421   55086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:04:20.077067   55086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:04:20.077116   55086 machine.go:96] duration metric: took 6.259124586s to provisionDockerMachine
	I1009 20:04:20.077131   55086 start.go:293] postStartSetup for "pause-739381" (driver="kvm2")
	I1009 20:04:20.077144   55086 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:04:20.077164   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:20.077600   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:04:20.077629   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:20.080924   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.081258   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:20.081281   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.081461   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:20.081665   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:20.081817   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:20.081996   55086 sshutil.go:53] new ssh client: &{IP:192.168.50.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/pause-739381/id_rsa Username:docker}
	I1009 20:04:20.178171   55086 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:04:20.184441   55086 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:04:20.184472   55086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:04:20.184533   55086 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:04:20.184627   55086 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:04:20.184753   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:04:20.198143   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:04:20.230169   55086 start.go:296] duration metric: took 153.013504ms for postStartSetup
	I1009 20:04:20.230225   55086 fix.go:56] duration metric: took 6.438423889s for fixHost
	I1009 20:04:20.230272   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:20.233508   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.233872   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:20.233903   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.234178   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:20.234370   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:20.234541   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:20.234687   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:20.234848   55086 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:20.235092   55086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.224 22 <nil> <nil>}
	I1009 20:04:20.235107   55086 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:04:20.356209   55086 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728504260.345281242
	
	I1009 20:04:20.356234   55086 fix.go:216] guest clock: 1728504260.345281242
	I1009 20:04:20.356245   55086 fix.go:229] Guest: 2024-10-09 20:04:20.345281242 +0000 UTC Remote: 2024-10-09 20:04:20.23023052 +0000 UTC m=+20.180088619 (delta=115.050722ms)
	I1009 20:04:20.356292   55086 fix.go:200] guest clock delta is within tolerance: 115.050722ms
	I1009 20:04:20.356299   55086 start.go:83] releasing machines lock for "pause-739381", held for 6.564523297s
	I1009 20:04:20.356323   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:20.356586   55086 main.go:141] libmachine: (pause-739381) Calling .GetIP
	I1009 20:04:20.359426   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.359731   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:20.359754   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.359907   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:20.360361   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:20.360519   55086 main.go:141] libmachine: (pause-739381) Calling .DriverName
	I1009 20:04:20.360628   55086 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:04:20.360699   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:20.360717   55086 ssh_runner.go:195] Run: cat /version.json
	I1009 20:04:20.360733   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHHostname
	I1009 20:04:20.363864   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.364164   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.364196   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:20.364210   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.364354   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:20.364530   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:20.364670   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:20.364866   55086 sshutil.go:53] new ssh client: &{IP:192.168.50.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/pause-739381/id_rsa Username:docker}
	I1009 20:04:20.364994   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:20.365019   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:20.365403   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHPort
	I1009 20:04:20.365542   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHKeyPath
	I1009 20:04:20.365695   55086 main.go:141] libmachine: (pause-739381) Calling .GetSSHUsername
	I1009 20:04:20.365810   55086 sshutil.go:53] new ssh client: &{IP:192.168.50.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/pause-739381/id_rsa Username:docker}
	I1009 20:04:20.456944   55086 ssh_runner.go:195] Run: systemctl --version
	I1009 20:04:20.484330   55086 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:04:20.645119   55086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:04:20.652110   55086 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:04:20.652178   55086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:04:20.661681   55086 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1009 20:04:20.661702   55086 start.go:495] detecting cgroup driver to use...
	I1009 20:04:20.661761   55086 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:04:20.684874   55086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:04:20.703350   55086 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:04:20.703405   55086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:04:20.719233   55086 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:04:20.738321   55086 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:04:20.922105   55086 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:04:21.083006   55086 docker.go:233] disabling docker service ...
	I1009 20:04:21.083166   55086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:04:21.101060   55086 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:04:21.115705   55086 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:04:21.256574   55086 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:04:21.401432   55086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:04:21.416560   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:04:21.441010   55086 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:04:21.441074   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.452057   55086 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:04:21.452134   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.462832   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.473228   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.483959   55086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:04:21.494594   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.504676   55086 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.516709   55086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:21.531660   55086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:04:21.541993   55086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:04:21.555394   55086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:21.694619   55086 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:04:28.122130   55086 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.427475567s)
	I1009 20:04:28.122156   55086 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:04:28.122198   55086 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:04:28.128836   55086 start.go:563] Will wait 60s for crictl version
	I1009 20:04:28.128892   55086 ssh_runner.go:195] Run: which crictl
	I1009 20:04:28.133208   55086 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:04:28.176606   55086 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:04:28.176692   55086 ssh_runner.go:195] Run: crio --version
	I1009 20:04:28.210150   55086 ssh_runner.go:195] Run: crio --version
	I1009 20:04:28.245162   55086 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:04:28.246388   55086 main.go:141] libmachine: (pause-739381) Calling .GetIP
	I1009 20:04:28.249189   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:28.249635   55086 main.go:141] libmachine: (pause-739381) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:0f:a3", ip: ""} in network mk-pause-739381: {Iface:virbr2 ExpiryTime:2024-10-09 21:02:52 +0000 UTC Type:0 Mac:52:54:00:b4:0f:a3 Iaid: IPaddr:192.168.50.224 Prefix:24 Hostname:pause-739381 Clientid:01:52:54:00:b4:0f:a3}
	I1009 20:04:28.249658   55086 main.go:141] libmachine: (pause-739381) DBG | domain pause-739381 has defined IP address 192.168.50.224 and MAC address 52:54:00:b4:0f:a3 in network mk-pause-739381
	I1009 20:04:28.249908   55086 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:04:28.254598   55086 kubeadm.go:883] updating cluster {Name:pause-739381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1
ClusterName:pause-739381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-se
curity-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:04:28.254713   55086 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:28.254758   55086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:04:28.305127   55086 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:04:28.305149   55086 crio.go:433] Images already preloaded, skipping extraction
	I1009 20:04:28.305197   55086 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:04:28.351185   55086 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:04:28.351206   55086 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:04:28.351215   55086 kubeadm.go:934] updating node { 192.168.50.224 8443 v1.31.1 crio true true} ...
	I1009 20:04:28.351342   55086 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-739381 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:pause-739381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:04:28.351410   55086 ssh_runner.go:195] Run: crio config
	I1009 20:04:28.411649   55086 cni.go:84] Creating CNI manager for ""
	I1009 20:04:28.411676   55086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:28.411692   55086 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:04:28.411721   55086 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.224 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-739381 NodeName:pause-739381 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.224"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.224 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:04:28.411900   55086 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.224
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-739381"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.224
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.224"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:04:28.411963   55086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:04:28.428198   55086 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:04:28.428259   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:04:28.440556   55086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1009 20:04:28.463037   55086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:04:28.484542   55086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I1009 20:04:28.506574   55086 ssh_runner.go:195] Run: grep 192.168.50.224	control-plane.minikube.internal$ /etc/hosts
	I1009 20:04:28.511086   55086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:28.656976   55086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:04:28.672226   55086 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381 for IP: 192.168.50.224
	I1009 20:04:28.672253   55086 certs.go:194] generating shared ca certs ...
	I1009 20:04:28.672273   55086 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:28.672490   55086 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:04:28.672558   55086 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:04:28.672573   55086 certs.go:256] generating profile certs ...
	I1009 20:04:28.672681   55086 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/client.key
	I1009 20:04:28.672784   55086 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/apiserver.key.1ae154fd
	I1009 20:04:28.672840   55086 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/proxy-client.key
	I1009 20:04:28.672962   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:04:28.672994   55086 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:04:28.673005   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:04:28.673075   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:04:28.673121   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:04:28.673146   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:04:28.673195   55086 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:04:28.673826   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:04:28.702252   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:04:28.730049   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:04:28.758163   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:04:28.789649   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 20:04:28.821233   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:04:28.850085   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:04:28.876598   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/pause-739381/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:04:28.902702   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:04:28.930001   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:04:28.956048   55086 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:04:28.979752   55086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:04:28.997008   55086 ssh_runner.go:195] Run: openssl version
	I1009 20:04:29.003149   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:04:29.016477   55086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:04:29.023242   55086 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:04:29.023306   55086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:04:29.029570   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:04:29.039992   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:04:29.053428   55086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:29.058208   55086 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:29.058252   55086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:04:29.064734   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:04:29.074616   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:04:29.086124   55086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:04:29.090983   55086 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:04:29.091027   55086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:04:29.096737   55086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:04:29.107113   55086 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:04:29.112866   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:04:29.122051   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:04:29.133233   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:04:29.198981   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:04:29.211316   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:04:29.288741   55086 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:04:29.330955   55086 kubeadm.go:392] StartCluster: {Name:pause-739381 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:pause-739381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:29.331075   55086 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:04:29.331139   55086 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:04:29.525283   55086 cri.go:89] found id: "321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691"
	I1009 20:04:29.525314   55086 cri.go:89] found id: "493e04832e62dc97fe2110dd7f7d6cb2e69c3bce37c556b564d3c3b01f03b22b"
	I1009 20:04:29.525322   55086 cri.go:89] found id: "e6d2cda0bc7e2c3e56813d8146daf7196db3b0c12f6134f6a5edc1666e25e0fa"
	I1009 20:04:29.525328   55086 cri.go:89] found id: "f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c"
	I1009 20:04:29.525333   55086 cri.go:89] found id: "e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e"
	I1009 20:04:29.525338   55086 cri.go:89] found id: "d131c306c9b5fafb763f395f1e1a5d9189ec04431780d73c6aac5b610032b9d4"
	I1009 20:04:29.525343   55086 cri.go:89] found id: ""
	I1009 20:04:29.525395   55086 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
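For reference, the repeated "openssl x509 -noout -in <cert> -checkend 86400" commands in the stderr log above are minikube's pre-flight probe that each existing control-plane certificate stays valid for at least another 86400 seconds (24 hours): openssl exits 0 when the certificate will still be valid at the end of the window and non-zero when it will have expired by then. The following is only a minimal Go sketch of that same probe, assuming a local certificate path rather than the remote ssh_runner invocation shown in the log; it is not the test's own code.

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// certValidFor reports whether the certificate at path will still be valid
// for at least the given number of seconds, using the same openssl
// invocation that appears in the log above.
func certValidFor(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", strconv.Itoa(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			// openssl exited non-zero: the certificate expires within the window.
			return false, nil
		}
		return false, err // e.g. openssl not installed or file unreadable
	}
	return true, nil
}

func main() {
	// Hypothetical local path; the run above checks certs under
	// /var/lib/minikube/certs/ inside the guest VM over SSH.
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver.crt", 86400)
	fmt.Println("valid for 24h:", ok, "err:", err)
}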
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-739381 -n pause-739381
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-739381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-739381 logs -n 25: (1.586734879s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC | 09 Oct 24 19:59 UTC |
	|         | --cancel-scheduled                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:00 UTC |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:00 UTC |
	| start   | -p kubernetes-upgrade-790037       | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p offline-crio-035060             | offline-crio-035060       | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:02 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048                 |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-111682          | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:01 UTC | 09 Oct 24 20:02 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-200546          | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:01 UTC | 09 Oct 24 20:03 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-035060             | offline-crio-035060       | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:02 UTC |
	| start   | -p pause-739381 --memory=2048      | pause-739381              | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:04 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-111682 stop        | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:02 UTC |
	| start   | -p stopped-upgrade-111682          | stopped-upgrade-111682    | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:03 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-200546          | running-upgrade-200546    | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:04 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-111682          | stopped-upgrade-111682    | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:03 UTC |
	| start   | -p force-systemd-flag-499844       | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:04 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-739381                    | pause-739381              | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-200546          | running-upgrade-200546    | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	| start   | -p cert-expiration-261596          | cert-expiration-261596    | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-499844 ssh cat  | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-499844       | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	| start   | -p cert-options-744883             | cert-options-744883       | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:04:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:04:34.293812   55627 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:04:34.293894   55627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:34.293897   55627 out.go:358] Setting ErrFile to fd 2...
	I1009 20:04:34.293900   55627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:34.294568   55627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:04:34.295472   55627 out.go:352] Setting JSON to false
	I1009 20:04:34.296894   55627 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6415,"bootTime":1728497859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:04:34.297017   55627 start.go:139] virtualization: kvm guest
	I1009 20:04:34.299026   55627 out.go:177] * [cert-options-744883] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:04:34.300246   55627 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:04:34.300253   55627 notify.go:220] Checking for updates...
	I1009 20:04:34.301369   55627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:04:34.302546   55627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:04:34.303867   55627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:34.305037   55627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:04:34.306112   55627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:04:34.307864   55627 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:34.308020   55627 config.go:182] Loaded profile config "kubernetes-upgrade-790037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:04:34.308206   55627 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:34.308346   55627 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:04:34.351889   55627 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:04:34.353103   55627 start.go:297] selected driver: kvm2
	I1009 20:04:34.353111   55627 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:04:34.353122   55627 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:04:34.354075   55627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:34.354163   55627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:04:34.371891   55627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:04:34.371925   55627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 20:04:34.372161   55627 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:04:34.372184   55627 cni.go:84] Creating CNI manager for ""
	I1009 20:04:34.372224   55627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:34.372229   55627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 20:04:34.372270   55627 start.go:340] cluster config:
	{Name:cert-options-744883 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-options-744883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:34.372344   55627 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:34.373947   55627 out.go:177] * Starting "cert-options-744883" primary control-plane node in "cert-options-744883" cluster
	I1009 20:04:30.715743   55086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:04:30.763554   55086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:04:30.781804   55086 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct  9 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Oct  9 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct  9 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Oct  9 20:03 /etc/kubernetes/scheduler.conf
	
	I1009 20:04:30.781879   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:04:30.796144   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:04:30.809776   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:04:30.823771   55086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:04:30.823834   55086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:04:30.837350   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:04:30.847683   55086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:04:30.847739   55086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:04:30.858007   55086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:04:30.868625   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:30.947704   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.043556   55086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095816091s)
	I1009 20:04:32.043587   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.317877   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.418205   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.543187   55086 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:04:32.543267   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.044243   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.543775   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.561808   55086 api_server.go:72] duration metric: took 1.018620525s to wait for apiserver process to appear ...
	I1009 20:04:33.561833   55086 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:04:33.561853   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.291167   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:04:35.291201   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:04:35.291215   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.383034   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:04:35.383078   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:04:35.562463   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.566772   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:04:35.566799   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:04:36.062341   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:36.070427   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:04:36.070461   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:04:36.562091   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:36.567693   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 200:
	ok
	I1009 20:04:36.574752   55086 api_server.go:141] control plane version: v1.31.1
	I1009 20:04:36.574781   55086 api_server.go:131] duration metric: took 3.012939649s to wait for apiserver health ...
	I1009 20:04:36.574791   55086 cni.go:84] Creating CNI manager for ""
	I1009 20:04:36.574800   55086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:36.576825   55086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:04:34.099995   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:34.100556   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:34.100589   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:34.100515   55382 retry.go:31] will retry after 905.323907ms: waiting for machine to come up
	I1009 20:04:35.007709   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:35.008155   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:35.008177   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:35.008103   55382 retry.go:31] will retry after 1.250762936s: waiting for machine to come up
	I1009 20:04:36.260161   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:36.260667   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:36.260682   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:36.260618   55382 retry.go:31] will retry after 1.632979014s: waiting for machine to come up
	I1009 20:04:37.895157   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:37.895622   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:37.895645   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:37.895546   55382 retry.go:31] will retry after 1.925863332s: waiting for machine to come up
	I1009 20:04:34.375045   55627 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:34.375088   55627 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:04:34.375103   55627 cache.go:56] Caching tarball of preloaded images
	I1009 20:04:34.375165   55627 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:04:34.375171   55627 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:04:34.375256   55627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-options-744883/config.json ...
	I1009 20:04:34.375268   55627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-options-744883/config.json: {Name:mkd13da49e06018a60f9bc49685ab7c9d04458e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:34.375383   55627 start.go:360] acquireMachinesLock for cert-options-744883: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:04:36.578379   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:04:36.589904   55086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:04:36.609178   55086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:04:36.609258   55086 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 20:04:36.609291   55086 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 20:04:36.624900   55086 system_pods.go:59] 6 kube-system pods found
	I1009 20:04:36.624945   55086 system_pods.go:61] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:36.624957   55086 system_pods.go:61] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:04:36.624967   55086 system_pods.go:61] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:04:36.624976   55086 system_pods.go:61] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:04:36.624994   55086 system_pods.go:61] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:04:36.625002   55086 system_pods.go:61] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:04:36.625014   55086 system_pods.go:74] duration metric: took 15.814216ms to wait for pod list to return data ...
	I1009 20:04:36.625025   55086 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:04:36.628881   55086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:04:36.628911   55086 node_conditions.go:123] node cpu capacity is 2
	I1009 20:04:36.628924   55086 node_conditions.go:105] duration metric: took 3.891005ms to run NodePressure ...
	I1009 20:04:36.628945   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:36.913141   55086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:04:36.917671   55086 kubeadm.go:739] kubelet initialised
	I1009 20:04:36.917689   55086 kubeadm.go:740] duration metric: took 4.524911ms waiting for restarted kubelet to initialise ...
	I1009 20:04:36.917697   55086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:36.922288   55086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:36.927168   55086 pod_ready.go:93] pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:36.927184   55086 pod_ready.go:82] duration metric: took 4.876329ms for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:36.927191   55086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:38.933809   55086 pod_ready.go:103] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:39.823474   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:39.824049   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:39.824069   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:39.823996   55382 retry.go:31] will retry after 2.675328453s: waiting for machine to come up
	I1009 20:04:42.500440   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:42.500928   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:42.500947   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:42.500860   55382 retry.go:31] will retry after 3.920094446s: waiting for machine to come up
	I1009 20:04:40.935183   55086 pod_ready.go:103] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:41.933752   55086 pod_ready.go:93] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:41.933774   55086 pod_ready.go:82] duration metric: took 5.006577279s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:41.933782   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:43.939788   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:46.423599   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:46.423988   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:46.424003   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:46.423934   55382 retry.go:31] will retry after 3.644416129s: waiting for machine to come up
	I1009 20:04:45.940628   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:48.440021   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:50.440211   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:51.439838   55086 pod_ready.go:93] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.439862   55086 pod_ready.go:82] duration metric: took 9.506073682s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.439872   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.445184   55086 pod_ready.go:93] pod "kube-controller-manager-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.445203   55086 pod_ready.go:82] duration metric: took 5.3254ms for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.445213   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.449938   55086 pod_ready.go:93] pod "kube-proxy-l9sfg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.449960   55086 pod_ready.go:82] duration metric: took 4.739191ms for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.449971   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.453845   55086 pod_ready.go:93] pod "kube-scheduler-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.453863   55086 pod_ready.go:82] duration metric: took 3.885625ms for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.453870   55086 pod_ready.go:39] duration metric: took 14.536165355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:51.453884   55086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:04:51.467466   55086 ops.go:34] apiserver oom_adj: -16
	I1009 20:04:51.467484   55086 kubeadm.go:597] duration metric: took 21.797230687s to restartPrimaryControlPlane
	I1009 20:04:51.467493   55086 kubeadm.go:394] duration metric: took 22.136557539s to StartCluster
	I1009 20:04:51.467511   55086 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:51.467582   55086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:04:51.468246   55086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:51.468456   55086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:04:51.468532   55086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:04:51.468763   55086 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:51.470476   55086 out.go:177] * Enabled addons: 
	I1009 20:04:51.470493   55086 out.go:177] * Verifying Kubernetes components...
	I1009 20:04:51.609352   55627 start.go:364] duration metric: took 17.23395098s to acquireMachinesLock for "cert-options-744883"
	I1009 20:04:51.609419   55627 start.go:93] Provisioning new machine with config: &{Name:cert-options-744883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-options-744883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:04:51.609520   55627 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:04:50.070259   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.070732   55359 main.go:141] libmachine: (cert-expiration-261596) Found IP for machine: 192.168.72.252
	I1009 20:04:50.070750   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has current primary IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.070757   55359 main.go:141] libmachine: (cert-expiration-261596) Reserving static IP address...
	I1009 20:04:50.071164   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find host DHCP lease matching {name: "cert-expiration-261596", mac: "52:54:00:fe:e5:2b", ip: "192.168.72.252"} in network mk-cert-expiration-261596
	I1009 20:04:50.144156   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Getting to WaitForSSH function...
	I1009 20:04:50.144177   55359 main.go:141] libmachine: (cert-expiration-261596) Reserved static IP address: 192.168.72.252
	I1009 20:04:50.144188   55359 main.go:141] libmachine: (cert-expiration-261596) Waiting for SSH to be available...
	I1009 20:04:50.146893   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.147425   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.147444   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.147573   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using SSH client type: external
	I1009 20:04:50.147595   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa (-rw-------)
	I1009 20:04:50.147621   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:04:50.147628   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | About to run SSH command:
	I1009 20:04:50.147639   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | exit 0
	I1009 20:04:50.275185   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | SSH cmd err, output: <nil>: 
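The WaitForSSH step above simply retries "exit 0" over SSH until the guest answers. A rough manual equivalent, using the key, options and address taken from the log (the trailing echo is only added here to make success visible), would be:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa \
        docker@192.168.72.252 'exit 0' && echo "SSH is up"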
	I1009 20:04:50.275426   55359 main.go:141] libmachine: (cert-expiration-261596) KVM machine creation complete!
	I1009 20:04:50.275712   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetConfigRaw
	I1009 20:04:50.276500   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:50.276692   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:50.276841   55359 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:04:50.276848   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetState
	I1009 20:04:50.278306   55359 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:04:50.278315   55359 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:04:50.278320   55359 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:04:50.278327   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.280512   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.280889   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.280909   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.281063   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.281217   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.281352   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.281459   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.281574   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.281794   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.281802   55359 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:04:50.398207   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:04:50.398222   55359 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:04:50.398231   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.400821   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.401118   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.401140   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.401257   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.401443   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.401595   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.401718   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.401898   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.402057   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.402064   55359 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:04:50.515656   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:04:50.515698   55359 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:04:50.515706   55359 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:04:50.515714   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.515919   55359 buildroot.go:166] provisioning hostname "cert-expiration-261596"
	I1009 20:04:50.515935   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.516085   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.518779   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.519120   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.519134   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.519278   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.519407   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.519525   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.519649   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.519793   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.519962   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.519970   55359 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-261596 && echo "cert-expiration-261596" | sudo tee /etc/hostname
	I1009 20:04:50.645969   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-261596
	
	I1009 20:04:50.645990   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.648398   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.648677   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.648692   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.648845   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.648996   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.649149   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.649235   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.649342   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.649516   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.649527   55359 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-261596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-261596/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-261596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:04:50.772242   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
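The script above either rewrites an existing 127.0.1.1 entry or appends a new one, so a quick check on the guest afterwards should show the new name (illustrative only, not something this log records):

    grep cert-expiration-261596 /etc/hosts
    # expected: 127.0.1.1 cert-expiration-261596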
	I1009 20:04:50.772260   55359 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:04:50.772287   55359 buildroot.go:174] setting up certificates
	I1009 20:04:50.772300   55359 provision.go:84] configureAuth start
	I1009 20:04:50.772307   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.772570   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:50.774988   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.775320   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.775343   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.775529   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.777608   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.777850   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.777887   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.777962   55359 provision.go:143] copyHostCerts
	I1009 20:04:50.778026   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:04:50.778040   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:04:50.778125   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:04:50.778237   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:04:50.778242   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:04:50.778270   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:04:50.778322   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:04:50.778325   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:04:50.778351   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:04:50.778389   55359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-261596 san=[127.0.0.1 192.168.72.252 cert-expiration-261596 localhost minikube]
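The server certificate is generated with the SAN list shown above (127.0.0.1, the VM IP, the machine name, localhost and minikube). If needed, the SANs in the resulting server.pem can be confirmed with a standard openssl check (not something the test itself runs):

    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'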
	I1009 20:04:50.959144   55359 provision.go:177] copyRemoteCerts
	I1009 20:04:50.959188   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:04:50.959208   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.961460   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.961721   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.961744   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.961856   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.962002   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.962110   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.962232   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.049790   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:04:51.073151   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:04:51.096490   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:04:51.119748   55359 provision.go:87] duration metric: took 347.437378ms to configureAuth
	I1009 20:04:51.119782   55359 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:04:51.119953   55359 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:51.120024   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.122389   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.122689   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.122708   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.122816   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.122995   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.123145   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.123244   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.123403   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:51.123559   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:51.123578   55359 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:04:51.348235   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:04:51.348253   55359 main.go:141] libmachine: Checking connection to Docker...
	I1009 20:04:51.348263   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetURL
	I1009 20:04:51.349446   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using libvirt version 6000000
	I1009 20:04:51.351384   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.351659   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.351673   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.351774   55359 main.go:141] libmachine: Docker is up and running!
	I1009 20:04:51.351780   55359 main.go:141] libmachine: Reticulating splines...
	I1009 20:04:51.351801   55359 client.go:171] duration metric: took 23.068631162s to LocalClient.Create
	I1009 20:04:51.351825   55359 start.go:167] duration metric: took 23.068700541s to libmachine.API.Create "cert-expiration-261596"
	I1009 20:04:51.351832   55359 start.go:293] postStartSetup for "cert-expiration-261596" (driver="kvm2")
	I1009 20:04:51.351845   55359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:04:51.351861   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.352073   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:04:51.352096   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.354274   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.354570   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.354589   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.354712   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.354850   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.354981   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.355112   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.442260   55359 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:04:51.447217   55359 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:04:51.447231   55359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:04:51.447304   55359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:04:51.447401   55359 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:04:51.447519   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:04:51.457780   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:04:51.481923   55359 start.go:296] duration metric: took 130.079198ms for postStartSetup
	I1009 20:04:51.481983   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetConfigRaw
	I1009 20:04:51.482542   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:51.485188   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.485571   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.485586   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.485821   55359 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-expiration-261596/config.json ...
	I1009 20:04:51.485998   55359 start.go:128] duration metric: took 23.225277895s to createHost
	I1009 20:04:51.486013   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.488219   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.488544   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.488564   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.488677   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.488831   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.488970   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.489093   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.489193   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:51.489359   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:51.489366   55359 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:04:51.609223   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728504291.580727681
	
	I1009 20:04:51.609237   55359 fix.go:216] guest clock: 1728504291.580727681
	I1009 20:04:51.609242   55359 fix.go:229] Guest: 2024-10-09 20:04:51.580727681 +0000 UTC Remote: 2024-10-09 20:04:51.486003182 +0000 UTC m=+23.350340866 (delta=94.724499ms)
	I1009 20:04:51.609258   55359 fix.go:200] guest clock delta is within tolerance: 94.724499ms
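The reported delta is simply the guest's date +%s.%N minus the host's wall clock at the same instant: 1728504291.580727681 - 1728504291.486003182 = 0.094724499 s, i.e. the 94.724499ms above, which is why it is logged as within tolerance.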
	I1009 20:04:51.609263   55359 start.go:83] releasing machines lock for "cert-expiration-261596", held for 23.348597963s
	I1009 20:04:51.609284   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.609525   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:51.612292   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.612676   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.612711   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.612873   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613420   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613611   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613717   55359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:04:51.613754   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.613783   55359 ssh_runner.go:195] Run: cat /version.json
	I1009 20:04:51.613818   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.616666   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.616779   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617011   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.617030   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617059   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.617069   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617189   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.617299   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.617375   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.617535   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.617568   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.617708   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.617733   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.617856   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.701154   55359 ssh_runner.go:195] Run: systemctl --version
	I1009 20:04:51.730983   55359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:04:51.892798   55359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:04:51.899046   55359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:04:51.899124   55359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:04:51.916404   55359 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:04:51.916416   55359 start.go:495] detecting cgroup driver to use...
	I1009 20:04:51.916478   55359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:04:51.932484   55359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:04:51.946899   55359 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:04:51.946949   55359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:04:51.961076   55359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:04:51.973989   55359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:04:52.089075   55359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:04:52.256748   55359 docker.go:233] disabling docker service ...
	I1009 20:04:52.256815   55359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:04:52.271043   55359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:04:52.284014   55359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:04:52.399764   55359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:04:52.524440   55359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:04:52.537971   55359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:04:52.557961   55359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:04:52.558023   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.569398   55359 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:04:52.569447   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.580615   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.590915   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.600869   55359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:04:52.611020   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.620983   55359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.638042   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
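Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon in the pod cgroup, and allow unprivileged binds from port 0. As a sketch (the TOML section headers are assumed here; the log only shows the individual key edits), the keys they ensure in /etc/crio/crio.conf.d/02-crio.conf are:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]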
	I1009 20:04:52.648382   55359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:04:52.661102   55359 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:04:52.661134   55359 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:04:52.676152   55359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
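The sysctl probe above fails only because the br_netfilter module is not yet loaded in the fresh guest, so the driver loads it and enables IPv4 forwarding. The manual equivalent of that recovery path, with the same probe repeated at the end, is:

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded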
	I1009 20:04:52.687352   55359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:52.803648   55359 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:04:52.902646   55359 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:04:52.902731   55359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:04:52.907431   55359 start.go:563] Will wait 60s for crictl version
	I1009 20:04:52.907484   55359 ssh_runner.go:195] Run: which crictl
	I1009 20:04:52.911226   55359 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:04:52.951000   55359 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:04:52.951092   55359 ssh_runner.go:195] Run: crio --version
	I1009 20:04:52.980122   55359 ssh_runner.go:195] Run: crio --version
	I1009 20:04:53.011437   55359 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:04:53.012667   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:53.015594   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:53.015932   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:53.015952   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:53.016134   55359 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:04:53.020217   55359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:04:53.032394   55359 kubeadm.go:883] updating cluster {Name:cert-expiration-261596 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:cert-expiration-261596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:04:53.032905   55359 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:53.033000   55359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:04:53.070007   55359 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:04:53.070058   55359 ssh_runner.go:195] Run: which lz4
	I1009 20:04:53.074090   55359 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:04:53.078434   55359 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:04:53.078452   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:04:51.611574   55627 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 20:04:51.611748   55627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:04:51.611796   55627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:04:51.628680   55627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I1009 20:04:51.629036   55627 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:04:51.629610   55627 main.go:141] libmachine: Using API Version  1
	I1009 20:04:51.629624   55627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:04:51.629977   55627 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:04:51.630157   55627 main.go:141] libmachine: (cert-options-744883) Calling .GetMachineName
	I1009 20:04:51.630294   55627 main.go:141] libmachine: (cert-options-744883) Calling .DriverName
	I1009 20:04:51.630505   55627 start.go:159] libmachine.API.Create for "cert-options-744883" (driver="kvm2")
	I1009 20:04:51.630549   55627 client.go:168] LocalClient.Create starting
	I1009 20:04:51.630578   55627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:04:51.630602   55627 main.go:141] libmachine: Decoding PEM data...
	I1009 20:04:51.630615   55627 main.go:141] libmachine: Parsing certificate...
	I1009 20:04:51.630657   55627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:04:51.630678   55627 main.go:141] libmachine: Decoding PEM data...
	I1009 20:04:51.630695   55627 main.go:141] libmachine: Parsing certificate...
	I1009 20:04:51.630715   55627 main.go:141] libmachine: Running pre-create checks...
	I1009 20:04:51.630729   55627 main.go:141] libmachine: (cert-options-744883) Calling .PreCreateCheck
	I1009 20:04:51.631157   55627 main.go:141] libmachine: (cert-options-744883) Calling .GetConfigRaw
	I1009 20:04:51.631573   55627 main.go:141] libmachine: Creating machine...
	I1009 20:04:51.631581   55627 main.go:141] libmachine: (cert-options-744883) Calling .Create
	I1009 20:04:51.631715   55627 main.go:141] libmachine: (cert-options-744883) Creating KVM machine...
	I1009 20:04:51.632998   55627 main.go:141] libmachine: (cert-options-744883) DBG | found existing default KVM network
	I1009 20:04:51.634393   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.634216   55777 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d6:14:56} reservation:<nil>}
	I1009 20:04:51.635465   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.635383   55777 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:1c:c5} reservation:<nil>}
	I1009 20:04:51.636725   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.636644   55777 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00031d090}
	I1009 20:04:51.636735   55627 main.go:141] libmachine: (cert-options-744883) DBG | created network xml: 
	I1009 20:04:51.636741   55627 main.go:141] libmachine: (cert-options-744883) DBG | <network>
	I1009 20:04:51.636745   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <name>mk-cert-options-744883</name>
	I1009 20:04:51.636754   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <dns enable='no'/>
	I1009 20:04:51.636757   55627 main.go:141] libmachine: (cert-options-744883) DBG |   
	I1009 20:04:51.636763   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1009 20:04:51.636766   55627 main.go:141] libmachine: (cert-options-744883) DBG |     <dhcp>
	I1009 20:04:51.636771   55627 main.go:141] libmachine: (cert-options-744883) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1009 20:04:51.636774   55627 main.go:141] libmachine: (cert-options-744883) DBG |     </dhcp>
	I1009 20:04:51.636778   55627 main.go:141] libmachine: (cert-options-744883) DBG |   </ip>
	I1009 20:04:51.636782   55627 main.go:141] libmachine: (cert-options-744883) DBG |   
	I1009 20:04:51.636788   55627 main.go:141] libmachine: (cert-options-744883) DBG | </network>
	I1009 20:04:51.636793   55627 main.go:141] libmachine: (cert-options-744883) DBG | 
	I1009 20:04:51.642188   55627 main.go:141] libmachine: (cert-options-744883) DBG | trying to create private KVM network mk-cert-options-744883 192.168.61.0/24...
	I1009 20:04:51.712820   55627 main.go:141] libmachine: (cert-options-744883) DBG | private KVM network mk-cert-options-744883 192.168.61.0/24 created
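The network is created through the libvirt API, but the XML printed above is ordinary libvirt network XML; assuming it were saved as mk-cert-options-744883.xml, the virsh equivalent would be:

    virsh net-define mk-cert-options-744883.xml
    virsh net-start mk-cert-options-744883
    virsh net-list --all   # the new 192.168.61.0/24 network should be listed as active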
	I1009 20:04:51.712835   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.712767   55777 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:51.712849   55627 main.go:141] libmachine: (cert-options-744883) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 ...
	I1009 20:04:51.712873   55627 main.go:141] libmachine: (cert-options-744883) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:04:51.712886   55627 main.go:141] libmachine: (cert-options-744883) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:04:51.952666   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.952497   55777 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/id_rsa...
	I1009 20:04:52.032186   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:52.032032   55777 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/cert-options-744883.rawdisk...
	I1009 20:04:52.032210   55627 main.go:141] libmachine: (cert-options-744883) DBG | Writing magic tar header
	I1009 20:04:52.032279   55627 main.go:141] libmachine: (cert-options-744883) DBG | Writing SSH key tar header
	I1009 20:04:52.032313   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:52.032138   55777 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 ...
	I1009 20:04:52.032333   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 (perms=drwx------)
	I1009 20:04:52.032361   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:04:52.032370   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:04:52.032380   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883
	I1009 20:04:52.032391   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:04:52.032403   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:04:52.032411   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:52.032422   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:04:52.032429   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:04:52.032436   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:04:52.032442   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home
	I1009 20:04:52.032451   55627 main.go:141] libmachine: (cert-options-744883) DBG | Skipping /home - not owner
	I1009 20:04:52.032470   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:04:52.032483   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 20:04:52.032492   55627 main.go:141] libmachine: (cert-options-744883) Creating domain...
	I1009 20:04:52.033428   55627 main.go:141] libmachine: (cert-options-744883) define libvirt domain using xml: 
	I1009 20:04:52.033439   55627 main.go:141] libmachine: (cert-options-744883) <domain type='kvm'>
	I1009 20:04:52.033447   55627 main.go:141] libmachine: (cert-options-744883)   <name>cert-options-744883</name>
	I1009 20:04:52.033453   55627 main.go:141] libmachine: (cert-options-744883)   <memory unit='MiB'>2048</memory>
	I1009 20:04:52.033459   55627 main.go:141] libmachine: (cert-options-744883)   <vcpu>2</vcpu>
	I1009 20:04:52.033469   55627 main.go:141] libmachine: (cert-options-744883)   <features>
	I1009 20:04:52.033476   55627 main.go:141] libmachine: (cert-options-744883)     <acpi/>
	I1009 20:04:52.033489   55627 main.go:141] libmachine: (cert-options-744883)     <apic/>
	I1009 20:04:52.033495   55627 main.go:141] libmachine: (cert-options-744883)     <pae/>
	I1009 20:04:52.033500   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033507   55627 main.go:141] libmachine: (cert-options-744883)   </features>
	I1009 20:04:52.033513   55627 main.go:141] libmachine: (cert-options-744883)   <cpu mode='host-passthrough'>
	I1009 20:04:52.033519   55627 main.go:141] libmachine: (cert-options-744883)   
	I1009 20:04:52.033524   55627 main.go:141] libmachine: (cert-options-744883)   </cpu>
	I1009 20:04:52.033530   55627 main.go:141] libmachine: (cert-options-744883)   <os>
	I1009 20:04:52.033541   55627 main.go:141] libmachine: (cert-options-744883)     <type>hvm</type>
	I1009 20:04:52.033561   55627 main.go:141] libmachine: (cert-options-744883)     <boot dev='cdrom'/>
	I1009 20:04:52.033573   55627 main.go:141] libmachine: (cert-options-744883)     <boot dev='hd'/>
	I1009 20:04:52.033582   55627 main.go:141] libmachine: (cert-options-744883)     <bootmenu enable='no'/>
	I1009 20:04:52.033587   55627 main.go:141] libmachine: (cert-options-744883)   </os>
	I1009 20:04:52.033594   55627 main.go:141] libmachine: (cert-options-744883)   <devices>
	I1009 20:04:52.033601   55627 main.go:141] libmachine: (cert-options-744883)     <disk type='file' device='cdrom'>
	I1009 20:04:52.033613   55627 main.go:141] libmachine: (cert-options-744883)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/boot2docker.iso'/>
	I1009 20:04:52.033620   55627 main.go:141] libmachine: (cert-options-744883)       <target dev='hdc' bus='scsi'/>
	I1009 20:04:52.033627   55627 main.go:141] libmachine: (cert-options-744883)       <readonly/>
	I1009 20:04:52.033632   55627 main.go:141] libmachine: (cert-options-744883)     </disk>
	I1009 20:04:52.033663   55627 main.go:141] libmachine: (cert-options-744883)     <disk type='file' device='disk'>
	I1009 20:04:52.033678   55627 main.go:141] libmachine: (cert-options-744883)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:04:52.033690   55627 main.go:141] libmachine: (cert-options-744883)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/cert-options-744883.rawdisk'/>
	I1009 20:04:52.033696   55627 main.go:141] libmachine: (cert-options-744883)       <target dev='hda' bus='virtio'/>
	I1009 20:04:52.033704   55627 main.go:141] libmachine: (cert-options-744883)     </disk>
	I1009 20:04:52.033710   55627 main.go:141] libmachine: (cert-options-744883)     <interface type='network'>
	I1009 20:04:52.033719   55627 main.go:141] libmachine: (cert-options-744883)       <source network='mk-cert-options-744883'/>
	I1009 20:04:52.033725   55627 main.go:141] libmachine: (cert-options-744883)       <model type='virtio'/>
	I1009 20:04:52.033732   55627 main.go:141] libmachine: (cert-options-744883)     </interface>
	I1009 20:04:52.033737   55627 main.go:141] libmachine: (cert-options-744883)     <interface type='network'>
	I1009 20:04:52.033744   55627 main.go:141] libmachine: (cert-options-744883)       <source network='default'/>
	I1009 20:04:52.033752   55627 main.go:141] libmachine: (cert-options-744883)       <model type='virtio'/>
	I1009 20:04:52.033760   55627 main.go:141] libmachine: (cert-options-744883)     </interface>
	I1009 20:04:52.033766   55627 main.go:141] libmachine: (cert-options-744883)     <serial type='pty'>
	I1009 20:04:52.033774   55627 main.go:141] libmachine: (cert-options-744883)       <target port='0'/>
	I1009 20:04:52.033779   55627 main.go:141] libmachine: (cert-options-744883)     </serial>
	I1009 20:04:52.033785   55627 main.go:141] libmachine: (cert-options-744883)     <console type='pty'>
	I1009 20:04:52.033791   55627 main.go:141] libmachine: (cert-options-744883)       <target type='serial' port='0'/>
	I1009 20:04:52.033807   55627 main.go:141] libmachine: (cert-options-744883)     </console>
	I1009 20:04:52.033813   55627 main.go:141] libmachine: (cert-options-744883)     <rng model='virtio'>
	I1009 20:04:52.033824   55627 main.go:141] libmachine: (cert-options-744883)       <backend model='random'>/dev/random</backend>
	I1009 20:04:52.033828   55627 main.go:141] libmachine: (cert-options-744883)     </rng>
	I1009 20:04:52.033832   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033836   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033840   55627 main.go:141] libmachine: (cert-options-744883)   </devices>
	I1009 20:04:52.033846   55627 main.go:141] libmachine: (cert-options-744883) </domain>
	I1009 20:04:52.033858   55627 main.go:141] libmachine: (cert-options-744883) 
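The XML above is the libvirt domain definition the kvm2 driver generates for this machine: 2 vCPUs with host-passthrough CPU mode, boot order cdrom-then-hd, the boot2docker ISO attached as a read-only CD-ROM, the raw disk on virtio, two virtio NICs (the mk-cert-options-744883 network plus the default network), a pty serial console, and a virtio RNG. As a rough sketch of what the subsequent "Getting domain xml..." / "Creating domain..." steps amount to, assuming the libvirt Go bindings (libvirt.org/go/libvirt); the helper name defineAndStart, the connection URI, and the XML file name are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

// defineAndStart hands a generated <domain> document (like the one logged
// above) to libvirt, registers it, and boots it. Defining and starting are
// separate libvirt calls, which is why the log prints them as distinct steps.
func defineAndStart(domainXML string) error {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return fmt.Errorf("connect to libvirt: %w", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("define domain: %w", err)
	}
	defer dom.Free()

	// Create() starts the previously defined domain (virDomainCreate).
	if err := dom.Create(); err != nil {
		return fmt.Errorf("start domain: %w", err)
	}
	return nil
}

func main() {
	// The file would hold a <domain>...</domain> document like the one in the log.
	xml, err := os.ReadFile("cert-options-744883.xml")
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart(string(xml)); err != nil {
		log.Fatal(err)
	}
}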
	I1009 20:04:52.038059   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:07:64:1b in network default
	I1009 20:04:52.039708   55627 main.go:141] libmachine: (cert-options-744883) Ensuring networks are active...
	I1009 20:04:52.039721   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:52.040399   55627 main.go:141] libmachine: (cert-options-744883) Ensuring network default is active
	I1009 20:04:52.040760   55627 main.go:141] libmachine: (cert-options-744883) Ensuring network mk-cert-options-744883 is active
	I1009 20:04:52.041252   55627 main.go:141] libmachine: (cert-options-744883) Getting domain xml...
	I1009 20:04:52.041897   55627 main.go:141] libmachine: (cert-options-744883) Creating domain...
	I1009 20:04:53.300453   55627 main.go:141] libmachine: (cert-options-744883) Waiting to get IP...
	I1009 20:04:53.301551   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.302241   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.302284   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.302186   55777 retry.go:31] will retry after 221.768767ms: waiting for machine to come up
	I1009 20:04:53.525726   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.526213   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.526259   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.526174   55777 retry.go:31] will retry after 301.738082ms: waiting for machine to come up
	I1009 20:04:53.829705   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.830553   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.830589   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.830509   55777 retry.go:31] will retry after 344.391933ms: waiting for machine to come up
	I1009 20:04:54.176097   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:54.176592   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:54.176612   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:54.176560   55777 retry.go:31] will retry after 414.583923ms: waiting for machine to come up
	I1009 20:04:51.471666   55086 addons.go:510] duration metric: took 3.139919ms for enable addons: enabled=[]
	I1009 20:04:51.471723   55086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:51.652855   55086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:04:51.674715   55086 node_ready.go:35] waiting up to 6m0s for node "pause-739381" to be "Ready" ...
	I1009 20:04:51.677754   55086 node_ready.go:49] node "pause-739381" has status "Ready":"True"
	I1009 20:04:51.677777   55086 node_ready.go:38] duration metric: took 3.024767ms for node "pause-739381" to be "Ready" ...
	I1009 20:04:51.677788   55086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:51.682845   55086 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.838159   55086 pod_ready.go:93] pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.838184   55086 pod_ready.go:82] duration metric: took 155.315678ms for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.838194   55086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.239284   55086 pod_ready.go:93] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:52.239306   55086 pod_ready.go:82] duration metric: took 401.106299ms for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.239315   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.638301   55086 pod_ready.go:93] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:52.638330   55086 pod_ready.go:82] duration metric: took 399.007061ms for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.638344   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.038677   55086 pod_ready.go:93] pod "kube-controller-manager-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.038696   55086 pod_ready.go:82] duration metric: took 400.343939ms for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.038705   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.438705   55086 pod_ready.go:93] pod "kube-proxy-l9sfg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.438727   55086 pod_ready.go:82] duration metric: took 400.016957ms for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.438736   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.838361   55086 pod_ready.go:93] pod "kube-scheduler-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.838385   55086 pod_ready.go:82] duration metric: took 399.641851ms for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.838395   55086 pod_ready.go:39] duration metric: took 2.160595656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:53.838412   55086 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:04:53.838467   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:53.860256   55086 api_server.go:72] duration metric: took 2.391771872s to wait for apiserver process to appear ...
	I1009 20:04:53.860281   55086 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:04:53.860308   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:53.866556   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 200:
	ok
	I1009 20:04:53.867954   55086 api_server.go:141] control plane version: v1.31.1
	I1009 20:04:53.867980   55086 api_server.go:131] duration metric: took 7.68336ms to wait for apiserver health ...
	I1009 20:04:53.867989   55086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:04:54.041454   55086 system_pods.go:59] 6 kube-system pods found
	I1009 20:04:54.041488   55086 system_pods.go:61] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:54.041495   55086 system_pods.go:61] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running
	I1009 20:04:54.041501   55086 system_pods.go:61] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running
	I1009 20:04:54.041515   55086 system_pods.go:61] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running
	I1009 20:04:54.041521   55086 system_pods.go:61] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running
	I1009 20:04:54.041527   55086 system_pods.go:61] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running
	I1009 20:04:54.041534   55086 system_pods.go:74] duration metric: took 173.537991ms to wait for pod list to return data ...
	I1009 20:04:54.041546   55086 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:04:54.238997   55086 default_sa.go:45] found service account: "default"
	I1009 20:04:54.239025   55086 default_sa.go:55] duration metric: took 197.4714ms for default service account to be created ...
	I1009 20:04:54.239035   55086 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:04:54.442343   55086 system_pods.go:86] 6 kube-system pods found
	I1009 20:04:54.442387   55086 system_pods.go:89] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:54.442397   55086 system_pods.go:89] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running
	I1009 20:04:54.442404   55086 system_pods.go:89] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running
	I1009 20:04:54.442410   55086 system_pods.go:89] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running
	I1009 20:04:54.442416   55086 system_pods.go:89] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running
	I1009 20:04:54.442421   55086 system_pods.go:89] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running
	I1009 20:04:54.442432   55086 system_pods.go:126] duration metric: took 203.389095ms to wait for k8s-apps to be running ...
	I1009 20:04:54.442442   55086 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:04:54.442521   55086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:04:54.461506   55086 system_svc.go:56] duration metric: took 19.054525ms WaitForService to wait for kubelet
	I1009 20:04:54.461546   55086 kubeadm.go:582] duration metric: took 2.993066862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:04:54.461590   55086 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:04:54.638797   55086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:04:54.638818   55086 node_conditions.go:123] node cpu capacity is 2
	I1009 20:04:54.638828   55086 node_conditions.go:105] duration metric: took 177.23151ms to run NodePressure ...
	I1009 20:04:54.638838   55086 start.go:241] waiting for startup goroutines ...
	I1009 20:04:54.638844   55086 start.go:246] waiting for cluster config update ...
	I1009 20:04:54.638851   55086 start.go:255] writing updated cluster config ...
	I1009 20:04:54.639169   55086 ssh_runner.go:195] Run: rm -f paused
	I1009 20:04:54.700903   55086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:04:54.703101   55086 out.go:177] * Done! kubectl is now configured to use "pause-739381" cluster and "default" namespace by default
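Before declaring the cluster ready, the run above waits for the apiserver to answer its health endpoint: the probe of https://192.168.50.224:8443/healthz returns 200 with body "ok", and only then do the pod, service-account, and kubelet checks proceed. A minimal sketch of that readiness probe, assuming a self-signed cluster certificate (InsecureSkipVerify stands in here for loading the real cluster CA, which the actual tooling would use):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 with body "ok", the same readiness signal recorded in the log.
func checkHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %v", url, timeout)
}

func main() {
	fmt.Println(checkHealthz("https://192.168.50.224:8443/healthz", 30*time.Second))
}

An equivalent one-off check from a shell is curl -k against the same URL; the health waiter above simply repeats that probe until it succeeds or the deadline expires.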
	
	
	==> CRI-O <==
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.460028112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504295460004308,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8a3dcb1-49d1-4f03-8b32-186b4e20f6dd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.460456039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b5cf60b-0519-4df7-aa15-42baf2349f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.460570350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b5cf60b-0519-4df7-aa15-42baf2349f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.460835839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b5cf60b-0519-4df7-aa15-42baf2349f72 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.507346302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b58bf49a-b045-4294-96f6-a4b57170a42f name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.507523181Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b58bf49a-b045-4294-96f6-a4b57170a42f name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.508417971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1dcbba5-c20c-4f66-9f4b-246264c252fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.508884781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504295508858959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1dcbba5-c20c-4f66-9f4b-246264c252fa name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.509409195Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=43131bc7-09b3-4dad-a8f8-9328cf180b5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.509565869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=43131bc7-09b3-4dad-a8f8-9328cf180b5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.510127775Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=43131bc7-09b3-4dad-a8f8-9328cf180b5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.568452371Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=98e93ae1-a9ab-4f41-b140-e5040e9d6173 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.568692717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=98e93ae1-a9ab-4f41-b140-e5040e9d6173 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.570961867Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d28f6afb-3ae7-4c35-b357-cd266d717ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.571998666Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504295571963275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d28f6afb-3ae7-4c35-b357-cd266d717ba8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.572829322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e3ed562-44bb-4916-b5de-0f21fb011a8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.572954879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e3ed562-44bb-4916-b5de-0f21fb011a8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.573310559Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e3ed562-44bb-4916-b5de-0f21fb011a8f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.631017534Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e8b292f-4f1a-4665-a4f1-95737d20ec63 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.631113169Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e8b292f-4f1a-4665-a4f1-95737d20ec63 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.632958569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00ce37f8-b494-4b25-88e3-9f950b6959ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.633374628Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504295633345415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00ce37f8-b494-4b25-88e3-9f950b6959ed name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.634251311Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2a685f1-0392-4c02-862c-8268075351ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.634320857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2a685f1-0392-4c02-862c-8268075351ea name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:55 pause-739381 crio[2312]: time="2024-10-09 20:04:55.634638462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2a685f1-0392-4c02-862c-8268075351ea name=/runtime.v1.RuntimeService/ListContainers
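The ListContainers dumps above are CRI-O's responses to the kubelet's periodic polling and list every container attempt (running and exited) on pause-739381. The same inventory can usually be reproduced directly on the node with crictl; the invocation below is illustrative and assumes the default minikube/CRI-O setup:

  # list all containers known to CRI-O, including exited earlier attempts
  minikube ssh -p pause-739381 "sudo crictl ps -a"
  # dump full metadata (labels, annotations, state) for one container by ID prefix
  minikube ssh -p pause-739381 "sudo crictl inspect 53146cbeb3c65"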
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1304217224b0f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   19 seconds ago       Running             kube-proxy                2                   7dc21a435cc40       kube-proxy-l9sfg
	53146cbeb3c65       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 seconds ago       Running             kube-apiserver            2                   f3c21f7889d1b       kube-apiserver-pause-739381
	a4a8e6d67cda3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   22 seconds ago       Running             kube-controller-manager   2                   3479b341df219       kube-controller-manager-pause-739381
	394e6971c3f78       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago       Running             coredns                   1                   8fc9d5ea4c995       coredns-7c65d6cfc9-5srcm
	1cdfbd5d9de4c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   26 seconds ago       Running             etcd                      1                   3f829ccf65c9f       etcd-pause-739381
	e350b39716de4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   26 seconds ago       Exited              kube-apiserver            1                   f3c21f7889d1b       kube-apiserver-pause-739381
	519a1f7f09a98       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   26 seconds ago       Running             kube-scheduler            1                   3fc6b41dfd531       kube-scheduler-pause-739381
	6fc70f0a3e935       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   26 seconds ago       Exited              kube-controller-manager   1                   3479b341df219       kube-controller-manager-pause-739381
	d328bf4c4cc54       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   26 seconds ago       Exited              kube-proxy                1                   7dc21a435cc40       kube-proxy-l9sfg
	321149d186b14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   8859557c18f39       coredns-7c65d6cfc9-5srcm
	f1ae581457f0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   f957138227560       etcd-pause-739381
	e14673f022497       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   476e1e798a580       kube-scheduler-pause-739381
	
	
	==> coredns [321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1327751358]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.766) (total time: 30004ms):
	Trace[1327751358]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (20:03:54.769)
	Trace[1327751358]: [30.004028222s] [30.004028222s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[562036867]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.767) (total time: 30003ms):
	Trace[562036867]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (20:03:54.769)
	Trace[562036867]: [30.003393391s] [30.003393391s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[401059428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.769) (total time: 30001ms):
	Trace[401059428]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:03:54.769)
	Trace[401059428]: [30.001562175s] [30.001562175s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37556 - 38589 "HINFO IN 1564325736694943915.5179022301709361024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019869277s
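Both CoreDNS instances above fail to reach the in-cluster API VIP (10.96.0.1:443) while the control plane is restarting: the first (exited) instance times out and is then sent SIGTERM, and its replacement retries with "connection refused" until the new kube-apiserver is up, after which it starts serving on :53. To re-check CoreDNS after the restart, standard kubectl queries suffice (the context name is assumed to match the minikube profile):

  kubectl --context pause-739381 -n kube-system get pods -l k8s-app=kube-dns
  kubectl --context pause-739381 -n kube-system logs -l k8s-app=kube-dns --tail=20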
	
	
	==> describe nodes <==
	Name:               pause-739381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-739381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=pause-739381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_03_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:03:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-739381
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:04:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.224
	  Hostname:    pause-739381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f7a4c80d18c4678a9fc7cd63565a4d4
	  System UUID:                7f7a4c80-d18c-4678-a9fc-7cd63565a4d4
	  Boot ID:                    023bdc44-ad50-4618-931f-e669a96c0c51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-5srcm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     93s
	  kube-system                 etcd-pause-739381                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         99s
	  kube-system                 kube-apiserver-pause-739381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-pause-739381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-l9sfg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-pause-739381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 90s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientPID     99s                kubelet          Node pause-739381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  99s                kubelet          Node pause-739381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                kubelet          Node pause-739381 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeReady                98s                kubelet          Node pause-739381 status is now: NodeReady
	  Normal  RegisteredNode           94s                node-controller  Node pause-739381 event: Registered Node pause-739381 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-739381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-739381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-739381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-739381 event: Registered Node pause-739381 in Controller
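The node description shows both kubelet starts (the original bring-up roughly 99s earlier and the restart roughly 24s earlier) and that pause-739381 remained Ready with no taints throughout. This is the usual describe-node view and can be regenerated with (context name assumed to match the profile):

  kubectl --context pause-739381 describe node pause-739381
  kubectl --context pause-739381 get node pause-739381 -o wide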
	
	
	==> dmesg <==
	[Oct 9 20:03] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075208] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.204261] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.182648] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.328905] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.173926] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.063682] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.841345] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.275251] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.276727] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.090016] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.386502] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.052753] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[ +11.756946] kauditd_printk_skb: 88 callbacks suppressed
	[Oct 9 20:04] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.178187] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.190760] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.140170] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.300730] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +6.954820] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.079201] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.555973] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +3.567760] kauditd_printk_skb: 123 callbacks suppressed
	[ +15.766755] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	
	
	==> etcd [1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050] <==
	{"level":"info","ts":"2024-10-09T20:04:30.602386Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.602435Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.602444Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.603133Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:30.603167Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:30.604017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb switched to configuration voters=(717356955326611387)"}
	{"level":"info","ts":"2024-10-09T20:04:30.606690Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"949c49925e715fcf","local-member-id":"9f49059a365ffbb","added-peer-id":"9f49059a365ffbb","added-peer-peer-urls":["https://192.168.50.224:2380"]}
	{"level":"info","ts":"2024-10-09T20:04:30.607198Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"949c49925e715fcf","local-member-id":"9f49059a365ffbb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:04:30.607347Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:04:31.585848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb received MsgPreVoteResp from 9f49059a365ffbb at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became candidate at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb received MsgVoteResp from 9f49059a365ffbb at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became leader at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f49059a365ffbb elected leader 9f49059a365ffbb at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.588443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:04:31.588442Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f49059a365ffbb","local-member-attributes":"{Name:pause-739381 ClientURLs:[https://192.168.50.224:2379]}","request-path":"/0/members/9f49059a365ffbb/attributes","cluster-id":"949c49925e715fcf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:04:31.588722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:04:31.589565Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:04:31.589605Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:04:31.590132Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:04:31.591028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.224:2379"}
	{"level":"info","ts":"2024-10-09T20:04:31.591386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:04:31.592305Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c] <==
	{"level":"info","ts":"2024-10-09T20:03:30.643146Z","caller":"traceutil/trace.go:171","msg":"trace[50539657] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm; range_end:; response_count:1; response_revision:377; }","duration":"208.530142ms","start":"2024-10-09T20:03:30.434606Z","end":"2024-10-09T20:03:30.643136Z","steps":["trace[50539657] 'agreement among raft nodes before linearized reading'  (duration: 208.333979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:03:30.643275Z","caller":"traceutil/trace.go:171","msg":"trace[1521933275] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"212.292353ms","start":"2024-10-09T20:03:30.430975Z","end":"2024-10-09T20:03:30.643267Z","steps":["trace[1521933275] 'process raft request'  (duration: 125.425468ms)","trace[1521933275] 'compare'  (duration: 86.328613ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T20:03:38.591928Z","caller":"traceutil/trace.go:171","msg":"trace[824177481] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"392.615004ms","start":"2024-10-09T20:03:38.199296Z","end":"2024-10-09T20:03:38.591911Z","steps":["trace[824177481] 'process raft request'  (duration: 392.457274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:03:38.592094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:03:38.199275Z","time spent":"392.741842ms","remote":"127.0.0.1:58518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":770,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" mod_revision:375 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" value_size:682 lease:9204111285558372408 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" > >"}
	{"level":"info","ts":"2024-10-09T20:03:38.592199Z","caller":"traceutil/trace.go:171","msg":"trace[1137833378] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:399; }","duration":"342.914248ms","start":"2024-10-09T20:03:38.249260Z","end":"2024-10-09T20:03:38.592175Z","steps":["trace[1137833378] 'read index received'  (duration: 342.905443ms)","trace[1137833378] 'applied index is now lower than readState.Index'  (duration: 7.426µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.592440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.167687ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:03:38.592662Z","caller":"traceutil/trace.go:171","msg":"trace[730308718] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:386; }","duration":"343.397136ms","start":"2024-10-09T20:03:38.249253Z","end":"2024-10-09T20:03:38.592650Z","steps":["trace[730308718] 'agreement among raft nodes before linearized reading'  (duration: 343.025935ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:03:38.817732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.671449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18427483322413148630 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-739381\" mod_revision:372 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-739381\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-739381\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:03:38.817827Z","caller":"traceutil/trace.go:171","msg":"trace[607989628] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:399; }","duration":"225.556283ms","start":"2024-10-09T20:03:38.592259Z","end":"2024-10-09T20:03:38.817815Z","steps":["trace[607989628] 'read index received'  (duration: 41.599506ms)","trace[607989628] 'applied index is now lower than readState.Index'  (duration: 183.955614ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.817919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.497684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm\" ","response":"range_response_count:1 size:5036"}
	{"level":"info","ts":"2024-10-09T20:03:38.817987Z","caller":"traceutil/trace.go:171","msg":"trace[883044233] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm; range_end:; response_count:1; response_revision:387; }","duration":"263.550183ms","start":"2024-10-09T20:03:38.554407Z","end":"2024-10-09T20:03:38.817957Z","steps":["trace[883044233] 'agreement among raft nodes before linearized reading'  (duration: 263.442547ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:03:38.818174Z","caller":"traceutil/trace.go:171","msg":"trace[1738024261] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"341.968512ms","start":"2024-10-09T20:03:38.476193Z","end":"2024-10-09T20:03:38.818161Z","steps":["trace[1738024261] 'process raft request'  (duration: 157.705332ms)","trace[1738024261] 'compare'  (duration: 183.507444ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.818279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:03:38.476174Z","time spent":"342.052181ms","remote":"127.0.0.1:41030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-739381\" mod_revision:372 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-739381\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-739381\" > >"}
	{"level":"warn","ts":"2024-10-09T20:03:39.068920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.129868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:03:39.068984Z","caller":"traceutil/trace.go:171","msg":"trace[609221548] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:387; }","duration":"109.211386ms","start":"2024-10-09T20:03:38.959762Z","end":"2024-10-09T20:03:39.068973Z","steps":["trace[609221548] 'range keys from in-memory index tree'  (duration: 109.017733ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:04:14.640071Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-09T20:04:14.640142Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-739381","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.224:2380"],"advertise-client-urls":["https://192.168.50.224:2379"]}
	{"level":"warn","ts":"2024-10-09T20:04:14.640278Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.640375Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.674361Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.224:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.674424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.224:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-09T20:04:14.674575Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f49059a365ffbb","current-leader-member-id":"9f49059a365ffbb"}
	{"level":"info","ts":"2024-10-09T20:04:14.676927Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:14.677127Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:14.677169Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-739381","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.224:2380"],"advertise-client-urls":["https://192.168.50.224:2379"]}
	
	
	==> kernel <==
	 20:04:56 up 2 min,  0 users,  load average: 0.85, 0.27, 0.09
	Linux pause-739381 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03] <==
	I1009 20:04:35.397961       1 policy_source.go:224] refreshing policies
	I1009 20:04:35.464994       1 shared_informer.go:320] Caches are synced for configmaps
	I1009 20:04:35.465259       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:04:35.465393       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:04:35.465598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:04:35.468852       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:04:35.468953       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:04:35.469170       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1009 20:04:35.469746       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1009 20:04:35.469781       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:04:35.469787       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:04:35.469791       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:04:35.469795       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:04:35.477905       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1009 20:04:35.481426       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1009 20:04:35.488367       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:04:35.491851       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1009 20:04:36.270258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:04:36.750456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 20:04:36.766667       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 20:04:36.798430       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 20:04:36.824310       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:04:36.830340       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:04:38.752173       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 20:04:39.052109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
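The restarted kube-apiserver reaches cache sync, re-registers its admission evaluators, and serves again within seconds; the single "Error removing old endpoints from kubernetes service" line is commonly seen right after an apiserver restart and is not the failure here. Its health endpoints can be queried through kubectl (shown for illustration):

  kubectl --context pause-739381 get --raw='/readyz?verbose'
  kubectl --context pause-739381 get --raw='/livez'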
	
	
	==> kube-apiserver [e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501] <==
	
	
	==> kube-controller-manager [6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490] <==
	
	
	==> kube-controller-manager [a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730] <==
	I1009 20:04:38.747731       1 shared_informer.go:320] Caches are synced for stateful set
	I1009 20:04:38.747814       1 shared_informer.go:320] Caches are synced for crt configmap
	I1009 20:04:38.747887       1 shared_informer.go:320] Caches are synced for node
	I1009 20:04:38.747956       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:04:38.747975       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:04:38.747979       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1009 20:04:38.747984       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1009 20:04:38.748038       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-739381"
	I1009 20:04:38.748083       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1009 20:04:38.774135       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1009 20:04:38.847067       1 shared_informer.go:320] Caches are synced for cronjob
	I1009 20:04:38.848129       1 shared_informer.go:320] Caches are synced for taint
	I1009 20:04:38.848231       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:04:38.848363       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-739381"
	I1009 20:04:38.848454       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 20:04:38.861971       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:04:38.887381       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:04:38.953696       1 shared_informer.go:320] Caches are synced for namespace
	I1009 20:04:38.997973       1 shared_informer.go:320] Caches are synced for service account
	I1009 20:04:39.397902       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:04:39.398026       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:04:39.400366       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:04:41.398384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.202363ms"
	I1009 20:04:41.422762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="21.580678ms"
	I1009 20:04:41.422841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.215µs"
	
	
	==> kube-proxy [1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:04:35.860759       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:04:35.870235       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.224"]
	E1009 20:04:35.870426       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:04:35.904088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:04:35.904171       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:04:35.904207       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:04:35.906723       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:04:35.907440       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:04:35.907538       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:04:35.909337       1 config.go:199] "Starting service config controller"
	I1009 20:04:35.909403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:04:35.909450       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:04:35.909536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:04:35.910034       1 config.go:328] "Starting node config controller"
	I1009 20:04:35.910091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:04:36.009889       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:04:36.009904       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:04:36.010242       1 shared_informer.go:320] Caches are synced for node config
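The truncated nftables errors at the top of this log come from kube-proxy's startup cleanup probing nftables on a kernel without the required support ("Operation not supported"); kube-proxy then falls back to the iptables proxier and syncs its caches, so these lines are noise rather than the failure cause. The active mode can be confirmed from the pod log (illustrative):

  kubectl --context pause-739381 -n kube-system logs kube-proxy-l9sfg | grep -i proxier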
	
	
	==> kube-proxy [d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02] <==
	I1009 20:04:30.489560       1 server_linux.go:66] "Using iptables proxy"
	
	
	==> kube-scheduler [519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a] <==
	W1009 20:04:35.298905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.299045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.299345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 20:04:35.299586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:04:35.300236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:04:35.300380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:04:35.300529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.300636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:04:35.301574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.301778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:04:35.301817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.309764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:04:35.315677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.315647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:04:35.320756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.367981       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:04:35.368035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1009 20:04:39.861000       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e] <==
	W1009 20:03:15.323529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:03:15.323557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:15.323572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:03:15.323580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 20:03:15.323601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.269185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:03:16.269238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.322762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:03:16.322830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.334152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:03:16.334319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.383532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.383588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.391539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.391664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.423638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:03:16.423801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.446352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:03:16.446982       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 20:03:16.480787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:03:16.482373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.583153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.583209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 20:03:19.112263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 20:04:14.635763       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659300    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/370e60aa81d0f70e1e47c24fa6206480-kubeconfig\") pod \"kube-controller-manager-pause-739381\" (UID: \"370e60aa81d0f70e1e47c24fa6206480\") " pod="kube-system/kube-controller-manager-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659314    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e08b7c39cc7d517a460612bcd55e5b12-etcd-certs\") pod \"etcd-pause-739381\" (UID: \"e08b7c39cc7d517a460612bcd55e5b12\") " pod="kube-system/etcd-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659329    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e08b7c39cc7d517a460612bcd55e5b12-etcd-data\") pod \"etcd-pause-739381\" (UID: \"e08b7c39cc7d517a460612bcd55e5b12\") " pod="kube-system/etcd-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659349    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a6b2df97c53dcc73f338337700b206-ca-certs\") pod \"kube-apiserver-pause-739381\" (UID: \"68a6b2df97c53dcc73f338337700b206\") " pod="kube-system/kube-apiserver-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659385    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/370e60aa81d0f70e1e47c24fa6206480-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-739381\" (UID: \"370e60aa81d0f70e1e47c24fa6206480\") " pod="kube-system/kube-controller-manager-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.823136    3111 kubelet_node_status.go:72] "Attempting to register node" node="pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: E1009 20:04:32.824233    3111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.224:8443: connect: connection refused" node="pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.934586    3111 scope.go:117] "RemoveContainer" containerID="e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.936598    3111 scope.go:117] "RemoveContainer" containerID="6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490"
	Oct 09 20:04:33 pause-739381 kubelet[3111]: E1009 20:04:33.051176    3111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-739381?timeout=10s\": dial tcp 192.168.50.224:8443: connect: connection refused" interval="800ms"
	Oct 09 20:04:33 pause-739381 kubelet[3111]: I1009 20:04:33.225454    3111 kubelet_node_status.go:72] "Attempting to register node" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.407980    3111 apiserver.go:52] "Watching apiserver"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446443    3111 kubelet_node_status.go:111] "Node was previously registered" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446677    3111 kubelet_node_status.go:75] "Successfully registered node" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446736    3111 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.447771    3111 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.452993    3111 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.478566    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78c75730-3c5a-44c7-8091-65b6eb07a4f1-xtables-lock\") pod \"kube-proxy-l9sfg\" (UID: \"78c75730-3c5a-44c7-8091-65b6eb07a4f1\") " pod="kube-system/kube-proxy-l9sfg"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.479163    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78c75730-3c5a-44c7-8091-65b6eb07a4f1-lib-modules\") pod \"kube-proxy-l9sfg\" (UID: \"78c75730-3c5a-44c7-8091-65b6eb07a4f1\") " pod="kube-system/kube-proxy-l9sfg"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: E1009 20:04:35.628783    3111 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-739381\" already exists" pod="kube-system/kube-apiserver-pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.713867    3111 scope.go:117] "RemoveContainer" containerID="d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02"
	Oct 09 20:04:42 pause-739381 kubelet[3111]: E1009 20:04:42.528675    3111 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504282527523473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:42 pause-739381 kubelet[3111]: E1009 20:04:42.528706    3111 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504282527523473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:52 pause-739381 kubelet[3111]: E1009 20:04:52.533176    3111 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504292532859637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:52 pause-739381 kubelet[3111]: E1009 20:04:52.533244    3111 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504292532859637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
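For reference, the post-mortem collection above can be reproduced by hand with the same commands the test helpers invoke below (binary path and profile name are taken directly from this log; run from the integration workspace):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-739381 -n pause-739381
	out/minikube-linux-amd64 -p pause-739381 logs -n 25
	kubectl --context pause-739381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running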
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-739381 -n pause-739381
helpers_test.go:261: (dbg) Run:  kubectl --context pause-739381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-739381 -n pause-739381
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-739381 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-739381 logs -n 25: (4.039143101s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 19:59 UTC | 09 Oct 24 19:59 UTC |
	|         | --cancel-scheduled                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:00 UTC |
	|         | --schedule 15s                     |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-000241           | scheduled-stop-000241     | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:00 UTC |
	| start   | -p kubernetes-upgrade-790037       | kubernetes-upgrade-790037 | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC |                     |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0       |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p offline-crio-035060             | offline-crio-035060       | jenkins | v1.34.0 | 09 Oct 24 20:00 UTC | 09 Oct 24 20:02 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048                 |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-111682          | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:01 UTC | 09 Oct 24 20:02 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| start   | -p running-upgrade-200546          | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:01 UTC | 09 Oct 24 20:03 UTC |
	|         | --memory=2200 --vm-driver=kvm2     |                           |         |         |                     |                     |
	|         |  --container-runtime=crio          |                           |         |         |                     |                     |
	| delete  | -p offline-crio-035060             | offline-crio-035060       | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:02 UTC |
	| start   | -p pause-739381 --memory=2048      | pause-739381              | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:04 UTC |
	|         | --install-addons=false             |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-111682 stop        | minikube                  | jenkins | v1.26.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:02 UTC |
	| start   | -p stopped-upgrade-111682          | stopped-upgrade-111682    | jenkins | v1.34.0 | 09 Oct 24 20:02 UTC | 09 Oct 24 20:03 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p running-upgrade-200546          | running-upgrade-200546    | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:04 UTC |
	|         | --memory=2200                      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-111682          | stopped-upgrade-111682    | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:03 UTC |
	| start   | -p force-systemd-flag-499844       | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:03 UTC | 09 Oct 24 20:04 UTC |
	|         | --memory=2048 --force-systemd      |                           |         |         |                     |                     |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| start   | -p pause-739381                    | pause-739381              | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	|         | --alsologtostderr                  |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                 |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-200546          | running-upgrade-200546    | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	| start   | -p cert-expiration-261596          | cert-expiration-261596    | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --cert-expiration=3m               |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-499844 ssh cat  | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-499844       | force-systemd-flag-499844 | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC | 09 Oct 24 20:04 UTC |
	| start   | -p cert-options-744883             | cert-options-744883       | jenkins | v1.34.0 | 09 Oct 24 20:04 UTC |                     |
	|         | --memory=2048                      |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1          |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15      |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost        |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com   |                           |         |         |                     |                     |
	|         | --apiserver-port=8555              |                           |         |         |                     |                     |
	|         | --driver=kvm2                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio           |                           |         |         |                     |                     |
	|---------|------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:04:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:04:34.293812   55627 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:04:34.293894   55627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:34.293897   55627 out.go:358] Setting ErrFile to fd 2...
	I1009 20:04:34.293900   55627 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:04:34.294568   55627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:04:34.295472   55627 out.go:352] Setting JSON to false
	I1009 20:04:34.296894   55627 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6415,"bootTime":1728497859,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:04:34.297017   55627 start.go:139] virtualization: kvm guest
	I1009 20:04:34.299026   55627 out.go:177] * [cert-options-744883] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:04:34.300246   55627 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:04:34.300253   55627 notify.go:220] Checking for updates...
	I1009 20:04:34.301369   55627 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:04:34.302546   55627 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:04:34.303867   55627 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:34.305037   55627 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:04:34.306112   55627 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:04:34.307864   55627 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:34.308020   55627 config.go:182] Loaded profile config "kubernetes-upgrade-790037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:04:34.308206   55627 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:34.308346   55627 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:04:34.351889   55627 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:04:34.353103   55627 start.go:297] selected driver: kvm2
	I1009 20:04:34.353111   55627 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:04:34.353122   55627 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:04:34.354075   55627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:34.354163   55627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:04:34.371891   55627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:04:34.371925   55627 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 20:04:34.372161   55627 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 20:04:34.372184   55627 cni.go:84] Creating CNI manager for ""
	I1009 20:04:34.372224   55627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:34.372229   55627 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 20:04:34.372270   55627 start.go:340] cluster config:
	{Name:cert-options-744883 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:cert-options-744883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:04:34.372344   55627 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:04:34.373947   55627 out.go:177] * Starting "cert-options-744883" primary control-plane node in "cert-options-744883" cluster
	I1009 20:04:30.715743   55086 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:04:30.763554   55086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:04:30.781804   55086 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Oct  9 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Oct  9 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Oct  9 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Oct  9 20:03 /etc/kubernetes/scheduler.conf
	
	I1009 20:04:30.781879   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:04:30.796144   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:04:30.809776   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:04:30.823771   55086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:04:30.823834   55086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:04:30.837350   55086 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:04:30.847683   55086 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:04:30.847739   55086 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:04:30.858007   55086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:04:30.868625   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:30.947704   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.043556   55086 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095816091s)
	I1009 20:04:32.043587   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.317877   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.418205   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:32.543187   55086 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:04:32.543267   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.044243   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.543775   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:33.561808   55086 api_server.go:72] duration metric: took 1.018620525s to wait for apiserver process to appear ...
	I1009 20:04:33.561833   55086 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:04:33.561853   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.291167   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:04:35.291201   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:04:35.291215   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.383034   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:04:35.383078   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:04:35.562463   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:35.566772   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:04:35.566799   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:04:36.062341   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:36.070427   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:04:36.070461   55086 api_server.go:103] status: https://192.168.50.224:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:04:36.562091   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:36.567693   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 200:
	ok
	I1009 20:04:36.574752   55086 api_server.go:141] control plane version: v1.31.1
	I1009 20:04:36.574781   55086 api_server.go:131] duration metric: took 3.012939649s to wait for apiserver health ...
	I1009 20:04:36.574791   55086 cni.go:84] Creating CNI manager for ""
	I1009 20:04:36.574800   55086 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:04:36.576825   55086 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:04:34.099995   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:34.100556   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:34.100589   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:34.100515   55382 retry.go:31] will retry after 905.323907ms: waiting for machine to come up
	I1009 20:04:35.007709   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:35.008155   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:35.008177   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:35.008103   55382 retry.go:31] will retry after 1.250762936s: waiting for machine to come up
	I1009 20:04:36.260161   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:36.260667   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:36.260682   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:36.260618   55382 retry.go:31] will retry after 1.632979014s: waiting for machine to come up
	I1009 20:04:37.895157   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:37.895622   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:37.895645   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:37.895546   55382 retry.go:31] will retry after 1.925863332s: waiting for machine to come up
	I1009 20:04:34.375045   55627 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:34.375088   55627 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:04:34.375103   55627 cache.go:56] Caching tarball of preloaded images
	I1009 20:04:34.375165   55627 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:04:34.375171   55627 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:04:34.375256   55627 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-options-744883/config.json ...
	I1009 20:04:34.375268   55627 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-options-744883/config.json: {Name:mkd13da49e06018a60f9bc49685ab7c9d04458e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:34.375383   55627 start.go:360] acquireMachinesLock for cert-options-744883: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
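
The preload.go lines above reduce to "use the tarball already in the local cache, otherwise download it". A small sketch of that check; the function name and download callback are illustrative only.

package example

import (
	"fmt"
	"os"
)

// ensurePreload skips the download when the preloaded-images tarball is
// already present in the local minikube cache directory.
func ensurePreload(cachePath string, download func(dest string) error) error {
	if _, err := os.Stat(cachePath); err == nil {
		fmt.Println("Found local preload, skipping download:", cachePath)
		return nil
	}
	return download(cachePath)
}
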
	I1009 20:04:36.578379   55086 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:04:36.589904   55086 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:04:36.609178   55086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:04:36.609258   55086 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1009 20:04:36.609291   55086 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1009 20:04:36.624900   55086 system_pods.go:59] 6 kube-system pods found
	I1009 20:04:36.624945   55086 system_pods.go:61] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:36.624957   55086 system_pods.go:61] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:04:36.624967   55086 system_pods.go:61] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:04:36.624976   55086 system_pods.go:61] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:04:36.624994   55086 system_pods.go:61] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:04:36.625002   55086 system_pods.go:61] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:04:36.625014   55086 system_pods.go:74] duration metric: took 15.814216ms to wait for pod list to return data ...
	I1009 20:04:36.625025   55086 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:04:36.628881   55086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:04:36.628911   55086 node_conditions.go:123] node cpu capacity is 2
	I1009 20:04:36.628924   55086 node_conditions.go:105] duration metric: took 3.891005ms to run NodePressure ...
	I1009 20:04:36.628945   55086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:04:36.913141   55086 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:04:36.917671   55086 kubeadm.go:739] kubelet initialised
	I1009 20:04:36.917689   55086 kubeadm.go:740] duration metric: took 4.524911ms waiting for restarted kubelet to initialise ...
	I1009 20:04:36.917697   55086 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:36.922288   55086 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:36.927168   55086 pod_ready.go:93] pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:36.927184   55086 pod_ready.go:82] duration metric: took 4.876329ms for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:36.927191   55086 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:38.933809   55086 pod_ready.go:103] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:39.823474   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:39.824049   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:39.824069   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:39.823996   55382 retry.go:31] will retry after 2.675328453s: waiting for machine to come up
	I1009 20:04:42.500440   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:42.500928   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:42.500947   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:42.500860   55382 retry.go:31] will retry after 3.920094446s: waiting for machine to come up
	I1009 20:04:40.935183   55086 pod_ready.go:103] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:41.933752   55086 pod_ready.go:93] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:41.933774   55086 pod_ready.go:82] duration metric: took 5.006577279s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:41.933782   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:43.939788   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:46.423599   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:46.423988   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find current IP address of domain cert-expiration-261596 in network mk-cert-expiration-261596
	I1009 20:04:46.424003   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | I1009 20:04:46.423934   55382 retry.go:31] will retry after 3.644416129s: waiting for machine to come up
	I1009 20:04:45.940628   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:48.440021   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:50.440211   55086 pod_ready.go:103] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"False"
	I1009 20:04:51.439838   55086 pod_ready.go:93] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.439862   55086 pod_ready.go:82] duration metric: took 9.506073682s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.439872   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.445184   55086 pod_ready.go:93] pod "kube-controller-manager-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.445203   55086 pod_ready.go:82] duration metric: took 5.3254ms for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.445213   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.449938   55086 pod_ready.go:93] pod "kube-proxy-l9sfg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.449960   55086 pod_ready.go:82] duration metric: took 4.739191ms for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.449971   55086 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.453845   55086 pod_ready.go:93] pod "kube-scheduler-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.453863   55086 pod_ready.go:82] duration metric: took 3.885625ms for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.453870   55086 pod_ready.go:39] duration metric: took 14.536165355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
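
The pod_ready.go wait above (poll each system-critical pod until its Ready condition is True, up to 4m0s) can be expressed with client-go roughly as follows; the 2s interval and function name are assumptions, not minikube's implementation.

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod every 2s until its Ready condition is True or the
// timeout (e.g. 4m0s, as in the log) expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
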
	I1009 20:04:51.453884   55086 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:04:51.467466   55086 ops.go:34] apiserver oom_adj: -16
	I1009 20:04:51.467484   55086 kubeadm.go:597] duration metric: took 21.797230687s to restartPrimaryControlPlane
	I1009 20:04:51.467493   55086 kubeadm.go:394] duration metric: took 22.136557539s to StartCluster
	I1009 20:04:51.467511   55086 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:51.467582   55086 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:04:51.468246   55086 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:04:51.468456   55086 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.224 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:04:51.468532   55086 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:04:51.468763   55086 config.go:182] Loaded profile config "pause-739381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:51.470476   55086 out.go:177] * Enabled addons: 
	I1009 20:04:51.470493   55086 out.go:177] * Verifying Kubernetes components...
	I1009 20:04:51.609352   55627 start.go:364] duration metric: took 17.23395098s to acquireMachinesLock for "cert-options-744883"
	I1009 20:04:51.609419   55627 start.go:93] Provisioning new machine with config: &{Name:cert-options-744883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:cert-options-744883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:04:51.609520   55627 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:04:50.070259   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.070732   55359 main.go:141] libmachine: (cert-expiration-261596) Found IP for machine: 192.168.72.252
	I1009 20:04:50.070750   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has current primary IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.070757   55359 main.go:141] libmachine: (cert-expiration-261596) Reserving static IP address...
	I1009 20:04:50.071164   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | unable to find host DHCP lease matching {name: "cert-expiration-261596", mac: "52:54:00:fe:e5:2b", ip: "192.168.72.252"} in network mk-cert-expiration-261596
	I1009 20:04:50.144156   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Getting to WaitForSSH function...
	I1009 20:04:50.144177   55359 main.go:141] libmachine: (cert-expiration-261596) Reserved static IP address: 192.168.72.252
	I1009 20:04:50.144188   55359 main.go:141] libmachine: (cert-expiration-261596) Waiting for SSH to be available...
	I1009 20:04:50.146893   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.147425   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.147444   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.147573   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using SSH client type: external
	I1009 20:04:50.147595   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa (-rw-------)
	I1009 20:04:50.147621   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:04:50.147628   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | About to run SSH command:
	I1009 20:04:50.147639   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | exit 0
	I1009 20:04:50.275185   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | SSH cmd err, output: <nil>: 
	I1009 20:04:50.275426   55359 main.go:141] libmachine: (cert-expiration-261596) KVM machine creation complete!
	I1009 20:04:50.275712   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetConfigRaw
	I1009 20:04:50.276500   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:50.276692   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:50.276841   55359 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:04:50.276848   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetState
	I1009 20:04:50.278306   55359 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:04:50.278315   55359 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:04:50.278320   55359 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:04:50.278327   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.280512   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.280889   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.280909   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.281063   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.281217   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.281352   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.281459   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.281574   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.281794   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.281802   55359 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:04:50.398207   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:04:50.398222   55359 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:04:50.398231   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.400821   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.401118   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.401140   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.401257   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.401443   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.401595   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.401718   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.401898   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.402057   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.402064   55359 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:04:50.515656   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:04:50.515698   55359 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:04:50.515706   55359 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:04:50.515714   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.515919   55359 buildroot.go:166] provisioning hostname "cert-expiration-261596"
	I1009 20:04:50.515935   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.516085   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.518779   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.519120   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.519134   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.519278   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.519407   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.519525   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.519649   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.519793   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.519962   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.519970   55359 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-261596 && echo "cert-expiration-261596" | sudo tee /etc/hostname
	I1009 20:04:50.645969   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-261596
	
	I1009 20:04:50.645990   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.648398   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.648677   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.648692   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.648845   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.648996   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.649149   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.649235   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.649342   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:50.649516   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:50.649527   55359 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-261596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-261596/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-261596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:04:50.772242   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:04:50.772260   55359 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:04:50.772287   55359 buildroot.go:174] setting up certificates
	I1009 20:04:50.772300   55359 provision.go:84] configureAuth start
	I1009 20:04:50.772307   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetMachineName
	I1009 20:04:50.772570   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:50.774988   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.775320   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.775343   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.775529   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.777608   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.777850   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.777887   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.777962   55359 provision.go:143] copyHostCerts
	I1009 20:04:50.778026   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:04:50.778040   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:04:50.778125   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:04:50.778237   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:04:50.778242   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:04:50.778270   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:04:50.778322   55359 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:04:50.778325   55359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:04:50.778351   55359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:04:50.778389   55359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-261596 san=[127.0.0.1 192.168.72.252 cert-expiration-261596 localhost minikube]
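
The provision.go line above generates a server certificate whose SAN list mixes IPs and DNS names. A hedged Go sketch of how such a SAN list maps onto a crypto/x509 template signed by the CA; the key size, validity period, and function name are assumptions, not minikube's provisioning code.

package example

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the given CA, splitting the
// SAN list (e.g. [127.0.0.1 192.168.72.252 cert-expiration-261596 localhost minikube])
// into IP and DNS entries.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans {
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}
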
	I1009 20:04:50.959144   55359 provision.go:177] copyRemoteCerts
	I1009 20:04:50.959188   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:04:50.959208   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:50.961460   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.961721   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:50.961744   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:50.961856   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:50.962002   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:50.962110   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:50.962232   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.049790   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:04:51.073151   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:04:51.096490   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:04:51.119748   55359 provision.go:87] duration metric: took 347.437378ms to configureAuth
	I1009 20:04:51.119782   55359 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:04:51.119953   55359 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:04:51.120024   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.122389   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.122689   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.122708   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.122816   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.122995   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.123145   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.123244   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.123403   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:51.123559   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:51.123578   55359 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:04:51.348235   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:04:51.348253   55359 main.go:141] libmachine: Checking connection to Docker...
	I1009 20:04:51.348263   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetURL
	I1009 20:04:51.349446   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | Using libvirt version 6000000
	I1009 20:04:51.351384   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.351659   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.351673   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.351774   55359 main.go:141] libmachine: Docker is up and running!
	I1009 20:04:51.351780   55359 main.go:141] libmachine: Reticulating splines...
	I1009 20:04:51.351801   55359 client.go:171] duration metric: took 23.068631162s to LocalClient.Create
	I1009 20:04:51.351825   55359 start.go:167] duration metric: took 23.068700541s to libmachine.API.Create "cert-expiration-261596"
	I1009 20:04:51.351832   55359 start.go:293] postStartSetup for "cert-expiration-261596" (driver="kvm2")
	I1009 20:04:51.351845   55359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:04:51.351861   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.352073   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:04:51.352096   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.354274   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.354570   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.354589   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.354712   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.354850   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.354981   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.355112   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.442260   55359 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:04:51.447217   55359 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:04:51.447231   55359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:04:51.447304   55359 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:04:51.447401   55359 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:04:51.447519   55359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:04:51.457780   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:04:51.481923   55359 start.go:296] duration metric: took 130.079198ms for postStartSetup
	I1009 20:04:51.481983   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetConfigRaw
	I1009 20:04:51.482542   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:51.485188   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.485571   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.485586   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.485821   55359 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-expiration-261596/config.json ...
	I1009 20:04:51.485998   55359 start.go:128] duration metric: took 23.225277895s to createHost
	I1009 20:04:51.486013   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.488219   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.488544   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.488564   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.488677   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.488831   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.488970   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.489093   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.489193   55359 main.go:141] libmachine: Using SSH client type: native
	I1009 20:04:51.489359   55359 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.252 22 <nil> <nil>}
	I1009 20:04:51.489366   55359 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:04:51.609223   55359 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728504291.580727681
	
	I1009 20:04:51.609237   55359 fix.go:216] guest clock: 1728504291.580727681
	I1009 20:04:51.609242   55359 fix.go:229] Guest: 2024-10-09 20:04:51.580727681 +0000 UTC Remote: 2024-10-09 20:04:51.486003182 +0000 UTC m=+23.350340866 (delta=94.724499ms)
	I1009 20:04:51.609258   55359 fix.go:200] guest clock delta is within tolerance: 94.724499ms
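
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the drift if it stays inside a tolerance. A rough Go sketch of that comparison; the tolerance value and float parsing are simplifications, not minikube's fix.go.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether the
// drift against the local clock is within tolerance. Parsing via float64 loses
// a little nanosecond precision, which is fine for this check.
func clockDeltaOK(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
}

func main() {
	delta, ok, _ := clockDeltaOK("1728504291.580727681\n", 2*time.Second)
	fmt.Println(delta, ok)
}
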
	I1009 20:04:51.609263   55359 start.go:83] releasing machines lock for "cert-expiration-261596", held for 23.348597963s
	I1009 20:04:51.609284   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.609525   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:51.612292   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.612676   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.612711   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.612873   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613420   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613611   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .DriverName
	I1009 20:04:51.613717   55359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:04:51.613754   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.613783   55359 ssh_runner.go:195] Run: cat /version.json
	I1009 20:04:51.613818   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHHostname
	I1009 20:04:51.616666   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.616779   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617011   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.617030   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617059   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:51.617069   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:51.617189   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.617299   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHPort
	I1009 20:04:51.617375   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.617535   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHKeyPath
	I1009 20:04:51.617568   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.617708   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
	I1009 20:04:51.617733   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetSSHUsername
	I1009 20:04:51.617856   55359 sshutil.go:53] new ssh client: &{IP:192.168.72.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa Username:docker}
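
Commands such as `cat /version.json` above are executed over SSH with the machine's private key. A minimal sketch with golang.org/x/crypto/ssh; the host, user, and key path below are taken from the log, while the helper itself is illustrative rather than minikube's ssh_runner/sshutil code.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH connects with a private key and returns the combined output of cmd.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.72.252:22", "docker",
		"/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-expiration-261596/id_rsa",
		"cat /version.json")
	fmt.Println(out, err)
}
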
	I1009 20:04:51.701154   55359 ssh_runner.go:195] Run: systemctl --version
	I1009 20:04:51.730983   55359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:04:51.892798   55359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:04:51.899046   55359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:04:51.899124   55359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:04:51.916404   55359 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:04:51.916416   55359 start.go:495] detecting cgroup driver to use...
	I1009 20:04:51.916478   55359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:04:51.932484   55359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:04:51.946899   55359 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:04:51.946949   55359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:04:51.961076   55359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:04:51.973989   55359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:04:52.089075   55359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:04:52.256748   55359 docker.go:233] disabling docker service ...
	I1009 20:04:52.256815   55359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:04:52.271043   55359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:04:52.284014   55359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:04:52.399764   55359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:04:52.524440   55359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:04:52.537971   55359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:04:52.557961   55359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:04:52.558023   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.569398   55359 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:04:52.569447   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.580615   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.590915   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.600869   55359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:04:52.611020   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.620983   55359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.638042   55359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:04:52.648382   55359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:04:52.661102   55359 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:04:52.661134   55359 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:04:52.676152   55359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:04:52.687352   55359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:52.803648   55359 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:04:52.902646   55359 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:04:52.902731   55359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:04:52.907431   55359 start.go:563] Will wait 60s for crictl version
	I1009 20:04:52.907484   55359 ssh_runner.go:195] Run: which crictl
	I1009 20:04:52.911226   55359 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:04:52.951000   55359 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:04:52.951092   55359 ssh_runner.go:195] Run: crio --version
	I1009 20:04:52.980122   55359 ssh_runner.go:195] Run: crio --version
	I1009 20:04:53.011437   55359 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:04:53.012667   55359 main.go:141] libmachine: (cert-expiration-261596) Calling .GetIP
	I1009 20:04:53.015594   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:53.015932   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:e5:2b", ip: ""} in network mk-cert-expiration-261596: {Iface:virbr4 ExpiryTime:2024-10-09 21:04:43 +0000 UTC Type:0 Mac:52:54:00:fe:e5:2b Iaid: IPaddr:192.168.72.252 Prefix:24 Hostname:cert-expiration-261596 Clientid:01:52:54:00:fe:e5:2b}
	I1009 20:04:53.015952   55359 main.go:141] libmachine: (cert-expiration-261596) DBG | domain cert-expiration-261596 has defined IP address 192.168.72.252 and MAC address 52:54:00:fe:e5:2b in network mk-cert-expiration-261596
	I1009 20:04:53.016134   55359 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:04:53.020217   55359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:04:53.032394   55359 kubeadm.go:883] updating cluster {Name:cert-expiration-261596 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.1 ClusterName:cert-expiration-261596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.252 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:04:53.032905   55359 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:04:53.033000   55359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:04:53.070007   55359 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:04:53.070058   55359 ssh_runner.go:195] Run: which lz4
	I1009 20:04:53.074090   55359 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:04:53.078434   55359 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:04:53.078452   55359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
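The preload flow recorded above is: ask CRI-O which images are present, conclude the v1.31.1 preload is missing, check that the tarball is not already on the guest, then scp the cached tarball over. A hedged way to replay the same checks by hand on the guest, using the paths from the log:

  # does CRI-O already have the control-plane images?
  sudo crictl images --output json | grep kube-apiserver || echo "images not preloaded"
  # is the preload tarball already present on the guest?
  stat -c "%s %y" /preloaded.tar.lz4 || echo "tarball missing; it will be copied from the host cache"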
	I1009 20:04:51.611574   55627 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1009 20:04:51.611748   55627 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:04:51.611796   55627 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:04:51.628680   55627 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I1009 20:04:51.629036   55627 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:04:51.629610   55627 main.go:141] libmachine: Using API Version  1
	I1009 20:04:51.629624   55627 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:04:51.629977   55627 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:04:51.630157   55627 main.go:141] libmachine: (cert-options-744883) Calling .GetMachineName
	I1009 20:04:51.630294   55627 main.go:141] libmachine: (cert-options-744883) Calling .DriverName
	I1009 20:04:51.630505   55627 start.go:159] libmachine.API.Create for "cert-options-744883" (driver="kvm2")
	I1009 20:04:51.630549   55627 client.go:168] LocalClient.Create starting
	I1009 20:04:51.630578   55627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:04:51.630602   55627 main.go:141] libmachine: Decoding PEM data...
	I1009 20:04:51.630615   55627 main.go:141] libmachine: Parsing certificate...
	I1009 20:04:51.630657   55627 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:04:51.630678   55627 main.go:141] libmachine: Decoding PEM data...
	I1009 20:04:51.630695   55627 main.go:141] libmachine: Parsing certificate...
	I1009 20:04:51.630715   55627 main.go:141] libmachine: Running pre-create checks...
	I1009 20:04:51.630729   55627 main.go:141] libmachine: (cert-options-744883) Calling .PreCreateCheck
	I1009 20:04:51.631157   55627 main.go:141] libmachine: (cert-options-744883) Calling .GetConfigRaw
	I1009 20:04:51.631573   55627 main.go:141] libmachine: Creating machine...
	I1009 20:04:51.631581   55627 main.go:141] libmachine: (cert-options-744883) Calling .Create
	I1009 20:04:51.631715   55627 main.go:141] libmachine: (cert-options-744883) Creating KVM machine...
	I1009 20:04:51.632998   55627 main.go:141] libmachine: (cert-options-744883) DBG | found existing default KVM network
	I1009 20:04:51.634393   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.634216   55777 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d6:14:56} reservation:<nil>}
	I1009 20:04:51.635465   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.635383   55777 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fd:1c:c5} reservation:<nil>}
	I1009 20:04:51.636725   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.636644   55777 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00031d090}
	I1009 20:04:51.636735   55627 main.go:141] libmachine: (cert-options-744883) DBG | created network xml: 
	I1009 20:04:51.636741   55627 main.go:141] libmachine: (cert-options-744883) DBG | <network>
	I1009 20:04:51.636745   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <name>mk-cert-options-744883</name>
	I1009 20:04:51.636754   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <dns enable='no'/>
	I1009 20:04:51.636757   55627 main.go:141] libmachine: (cert-options-744883) DBG |   
	I1009 20:04:51.636763   55627 main.go:141] libmachine: (cert-options-744883) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1009 20:04:51.636766   55627 main.go:141] libmachine: (cert-options-744883) DBG |     <dhcp>
	I1009 20:04:51.636771   55627 main.go:141] libmachine: (cert-options-744883) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1009 20:04:51.636774   55627 main.go:141] libmachine: (cert-options-744883) DBG |     </dhcp>
	I1009 20:04:51.636778   55627 main.go:141] libmachine: (cert-options-744883) DBG |   </ip>
	I1009 20:04:51.636782   55627 main.go:141] libmachine: (cert-options-744883) DBG |   
	I1009 20:04:51.636788   55627 main.go:141] libmachine: (cert-options-744883) DBG | </network>
	I1009 20:04:51.636793   55627 main.go:141] libmachine: (cert-options-744883) DBG | 
	I1009 20:04:51.642188   55627 main.go:141] libmachine: (cert-options-744883) DBG | trying to create private KVM network mk-cert-options-744883 192.168.61.0/24...
	I1009 20:04:51.712820   55627 main.go:141] libmachine: (cert-options-744883) DBG | private KVM network mk-cert-options-744883 192.168.61.0/24 created
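The XML printed above is the definition that was just created as the private network mk-cert-options-744883. A hedged sketch for inspecting it with standard libvirt tooling, assuming virsh access on the host running the driver:

  # list libvirt networks and dump the one minikube just created
  sudo virsh net-list --all
  sudo virsh net-dumpxml mk-cert-options-744883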
	I1009 20:04:51.712835   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.712767   55777 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:51.712849   55627 main.go:141] libmachine: (cert-options-744883) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 ...
	I1009 20:04:51.712873   55627 main.go:141] libmachine: (cert-options-744883) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:04:51.712886   55627 main.go:141] libmachine: (cert-options-744883) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:04:51.952666   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:51.952497   55777 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/id_rsa...
	I1009 20:04:52.032186   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:52.032032   55777 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/cert-options-744883.rawdisk...
	I1009 20:04:52.032210   55627 main.go:141] libmachine: (cert-options-744883) DBG | Writing magic tar header
	I1009 20:04:52.032279   55627 main.go:141] libmachine: (cert-options-744883) DBG | Writing SSH key tar header
	I1009 20:04:52.032313   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:52.032138   55777 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 ...
	I1009 20:04:52.032333   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883 (perms=drwx------)
	I1009 20:04:52.032361   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:04:52.032370   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:04:52.032380   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883
	I1009 20:04:52.032391   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:04:52.032403   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:04:52.032411   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:04:52.032422   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:04:52.032429   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:04:52.032436   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:04:52.032442   55627 main.go:141] libmachine: (cert-options-744883) DBG | Checking permissions on dir: /home
	I1009 20:04:52.032451   55627 main.go:141] libmachine: (cert-options-744883) DBG | Skipping /home - not owner
	I1009 20:04:52.032470   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:04:52.032483   55627 main.go:141] libmachine: (cert-options-744883) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 20:04:52.032492   55627 main.go:141] libmachine: (cert-options-744883) Creating domain...
	I1009 20:04:52.033428   55627 main.go:141] libmachine: (cert-options-744883) define libvirt domain using xml: 
	I1009 20:04:52.033439   55627 main.go:141] libmachine: (cert-options-744883) <domain type='kvm'>
	I1009 20:04:52.033447   55627 main.go:141] libmachine: (cert-options-744883)   <name>cert-options-744883</name>
	I1009 20:04:52.033453   55627 main.go:141] libmachine: (cert-options-744883)   <memory unit='MiB'>2048</memory>
	I1009 20:04:52.033459   55627 main.go:141] libmachine: (cert-options-744883)   <vcpu>2</vcpu>
	I1009 20:04:52.033469   55627 main.go:141] libmachine: (cert-options-744883)   <features>
	I1009 20:04:52.033476   55627 main.go:141] libmachine: (cert-options-744883)     <acpi/>
	I1009 20:04:52.033489   55627 main.go:141] libmachine: (cert-options-744883)     <apic/>
	I1009 20:04:52.033495   55627 main.go:141] libmachine: (cert-options-744883)     <pae/>
	I1009 20:04:52.033500   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033507   55627 main.go:141] libmachine: (cert-options-744883)   </features>
	I1009 20:04:52.033513   55627 main.go:141] libmachine: (cert-options-744883)   <cpu mode='host-passthrough'>
	I1009 20:04:52.033519   55627 main.go:141] libmachine: (cert-options-744883)   
	I1009 20:04:52.033524   55627 main.go:141] libmachine: (cert-options-744883)   </cpu>
	I1009 20:04:52.033530   55627 main.go:141] libmachine: (cert-options-744883)   <os>
	I1009 20:04:52.033541   55627 main.go:141] libmachine: (cert-options-744883)     <type>hvm</type>
	I1009 20:04:52.033561   55627 main.go:141] libmachine: (cert-options-744883)     <boot dev='cdrom'/>
	I1009 20:04:52.033573   55627 main.go:141] libmachine: (cert-options-744883)     <boot dev='hd'/>
	I1009 20:04:52.033582   55627 main.go:141] libmachine: (cert-options-744883)     <bootmenu enable='no'/>
	I1009 20:04:52.033587   55627 main.go:141] libmachine: (cert-options-744883)   </os>
	I1009 20:04:52.033594   55627 main.go:141] libmachine: (cert-options-744883)   <devices>
	I1009 20:04:52.033601   55627 main.go:141] libmachine: (cert-options-744883)     <disk type='file' device='cdrom'>
	I1009 20:04:52.033613   55627 main.go:141] libmachine: (cert-options-744883)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/boot2docker.iso'/>
	I1009 20:04:52.033620   55627 main.go:141] libmachine: (cert-options-744883)       <target dev='hdc' bus='scsi'/>
	I1009 20:04:52.033627   55627 main.go:141] libmachine: (cert-options-744883)       <readonly/>
	I1009 20:04:52.033632   55627 main.go:141] libmachine: (cert-options-744883)     </disk>
	I1009 20:04:52.033663   55627 main.go:141] libmachine: (cert-options-744883)     <disk type='file' device='disk'>
	I1009 20:04:52.033678   55627 main.go:141] libmachine: (cert-options-744883)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:04:52.033690   55627 main.go:141] libmachine: (cert-options-744883)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/cert-options-744883/cert-options-744883.rawdisk'/>
	I1009 20:04:52.033696   55627 main.go:141] libmachine: (cert-options-744883)       <target dev='hda' bus='virtio'/>
	I1009 20:04:52.033704   55627 main.go:141] libmachine: (cert-options-744883)     </disk>
	I1009 20:04:52.033710   55627 main.go:141] libmachine: (cert-options-744883)     <interface type='network'>
	I1009 20:04:52.033719   55627 main.go:141] libmachine: (cert-options-744883)       <source network='mk-cert-options-744883'/>
	I1009 20:04:52.033725   55627 main.go:141] libmachine: (cert-options-744883)       <model type='virtio'/>
	I1009 20:04:52.033732   55627 main.go:141] libmachine: (cert-options-744883)     </interface>
	I1009 20:04:52.033737   55627 main.go:141] libmachine: (cert-options-744883)     <interface type='network'>
	I1009 20:04:52.033744   55627 main.go:141] libmachine: (cert-options-744883)       <source network='default'/>
	I1009 20:04:52.033752   55627 main.go:141] libmachine: (cert-options-744883)       <model type='virtio'/>
	I1009 20:04:52.033760   55627 main.go:141] libmachine: (cert-options-744883)     </interface>
	I1009 20:04:52.033766   55627 main.go:141] libmachine: (cert-options-744883)     <serial type='pty'>
	I1009 20:04:52.033774   55627 main.go:141] libmachine: (cert-options-744883)       <target port='0'/>
	I1009 20:04:52.033779   55627 main.go:141] libmachine: (cert-options-744883)     </serial>
	I1009 20:04:52.033785   55627 main.go:141] libmachine: (cert-options-744883)     <console type='pty'>
	I1009 20:04:52.033791   55627 main.go:141] libmachine: (cert-options-744883)       <target type='serial' port='0'/>
	I1009 20:04:52.033807   55627 main.go:141] libmachine: (cert-options-744883)     </console>
	I1009 20:04:52.033813   55627 main.go:141] libmachine: (cert-options-744883)     <rng model='virtio'>
	I1009 20:04:52.033824   55627 main.go:141] libmachine: (cert-options-744883)       <backend model='random'>/dev/random</backend>
	I1009 20:04:52.033828   55627 main.go:141] libmachine: (cert-options-744883)     </rng>
	I1009 20:04:52.033832   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033836   55627 main.go:141] libmachine: (cert-options-744883)     
	I1009 20:04:52.033840   55627 main.go:141] libmachine: (cert-options-744883)   </devices>
	I1009 20:04:52.033846   55627 main.go:141] libmachine: (cert-options-744883) </domain>
	I1009 20:04:52.033858   55627 main.go:141] libmachine: (cert-options-744883) 
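Once this domain XML has been defined (the following lines show libvirt assigning MAC addresses to its two NICs), the stored definition can be read back from libvirt. A minimal sketch, assuming virsh access on the host:

  sudo virsh dumpxml cert-options-744883     # full domain XML as stored by libvirt
  sudo virsh domiflist cert-options-744883   # the two virtio interfaces: mk-cert-options-744883 and default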
	I1009 20:04:52.038059   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:07:64:1b in network default
	I1009 20:04:52.039708   55627 main.go:141] libmachine: (cert-options-744883) Ensuring networks are active...
	I1009 20:04:52.039721   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:52.040399   55627 main.go:141] libmachine: (cert-options-744883) Ensuring network default is active
	I1009 20:04:52.040760   55627 main.go:141] libmachine: (cert-options-744883) Ensuring network mk-cert-options-744883 is active
	I1009 20:04:52.041252   55627 main.go:141] libmachine: (cert-options-744883) Getting domain xml...
	I1009 20:04:52.041897   55627 main.go:141] libmachine: (cert-options-744883) Creating domain...
	I1009 20:04:53.300453   55627 main.go:141] libmachine: (cert-options-744883) Waiting to get IP...
	I1009 20:04:53.301551   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.302241   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.302284   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.302186   55777 retry.go:31] will retry after 221.768767ms: waiting for machine to come up
	I1009 20:04:53.525726   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.526213   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.526259   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.526174   55777 retry.go:31] will retry after 301.738082ms: waiting for machine to come up
	I1009 20:04:53.829705   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:53.830553   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:53.830589   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:53.830509   55777 retry.go:31] will retry after 344.391933ms: waiting for machine to come up
	I1009 20:04:54.176097   55627 main.go:141] libmachine: (cert-options-744883) DBG | domain cert-options-744883 has defined MAC address 52:54:00:d6:f7:22 in network mk-cert-options-744883
	I1009 20:04:54.176592   55627 main.go:141] libmachine: (cert-options-744883) DBG | unable to find current IP address of domain cert-options-744883 in network mk-cert-options-744883
	I1009 20:04:54.176612   55627 main.go:141] libmachine: (cert-options-744883) DBG | I1009 20:04:54.176560   55777 retry.go:31] will retry after 414.583923ms: waiting for machine to come up
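The retry loop above is polling until the new VM obtains a DHCP lease on the private network. An equivalent manual check, assuming virsh access on the host and the guest MAC address shown in the log:

  # show leases on the private network; the guest appears once DHCP completes
  sudo virsh net-dhcp-leases mk-cert-options-744883
  # or poll until an address shows up for the guest's MAC
  until sudo virsh net-dhcp-leases mk-cert-options-744883 | grep -q '52:54:00:d6:f7:22'; do sleep 2; done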
	I1009 20:04:51.471666   55086 addons.go:510] duration metric: took 3.139919ms for enable addons: enabled=[]
	I1009 20:04:51.471723   55086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:04:51.652855   55086 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:04:51.674715   55086 node_ready.go:35] waiting up to 6m0s for node "pause-739381" to be "Ready" ...
	I1009 20:04:51.677754   55086 node_ready.go:49] node "pause-739381" has status "Ready":"True"
	I1009 20:04:51.677777   55086 node_ready.go:38] duration metric: took 3.024767ms for node "pause-739381" to be "Ready" ...
	I1009 20:04:51.677788   55086 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:04:51.682845   55086 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.838159   55086 pod_ready.go:93] pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:51.838184   55086 pod_ready.go:82] duration metric: took 155.315678ms for pod "coredns-7c65d6cfc9-5srcm" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:51.838194   55086 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.239284   55086 pod_ready.go:93] pod "etcd-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:52.239306   55086 pod_ready.go:82] duration metric: took 401.106299ms for pod "etcd-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.239315   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.638301   55086 pod_ready.go:93] pod "kube-apiserver-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:52.638330   55086 pod_ready.go:82] duration metric: took 399.007061ms for pod "kube-apiserver-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:52.638344   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.038677   55086 pod_ready.go:93] pod "kube-controller-manager-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.038696   55086 pod_ready.go:82] duration metric: took 400.343939ms for pod "kube-controller-manager-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.038705   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.438705   55086 pod_ready.go:93] pod "kube-proxy-l9sfg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.438727   55086 pod_ready.go:82] duration metric: took 400.016957ms for pod "kube-proxy-l9sfg" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.438736   55086 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.838361   55086 pod_ready.go:93] pod "kube-scheduler-pause-739381" in "kube-system" namespace has status "Ready":"True"
	I1009 20:04:53.838385   55086 pod_ready.go:82] duration metric: took 399.641851ms for pod "kube-scheduler-pause-739381" in "kube-system" namespace to be "Ready" ...
	I1009 20:04:53.838395   55086 pod_ready.go:39] duration metric: took 2.160595656s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
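The per-pod readiness waits above can be approximated from outside the run with kubectl. A hedged sketch, assuming the pause-739381 context that this run writes to the kubeconfig, and relying on the k8s-app=kube-dns and tier=control-plane labels that also appear in the sandbox metadata later in this log:

  kubectl --context pause-739381 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  kubectl --context pause-739381 -n kube-system get pods -l tier=control-plane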
	I1009 20:04:53.838412   55086 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:04:53.838467   55086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:04:53.860256   55086 api_server.go:72] duration metric: took 2.391771872s to wait for apiserver process to appear ...
	I1009 20:04:53.860281   55086 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:04:53.860308   55086 api_server.go:253] Checking apiserver healthz at https://192.168.50.224:8443/healthz ...
	I1009 20:04:53.866556   55086 api_server.go:279] https://192.168.50.224:8443/healthz returned 200:
	ok
	I1009 20:04:53.867954   55086 api_server.go:141] control plane version: v1.31.1
	I1009 20:04:53.867980   55086 api_server.go:131] duration metric: took 7.68336ms to wait for apiserver health ...
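The healthz and version probes above target endpoints that are typically reachable anonymously on a default minikube cluster, so they can be reproduced with curl against the address in the log (-k because the serving certificate is signed by minikubeCA):

  curl -k https://192.168.50.224:8443/healthz   # expects the literal body "ok"
  curl -k https://192.168.50.224:8443/version   # reports v1.31.1 for this cluster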
	I1009 20:04:53.867989   55086 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:04:54.041454   55086 system_pods.go:59] 6 kube-system pods found
	I1009 20:04:54.041488   55086 system_pods.go:61] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:54.041495   55086 system_pods.go:61] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running
	I1009 20:04:54.041501   55086 system_pods.go:61] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running
	I1009 20:04:54.041515   55086 system_pods.go:61] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running
	I1009 20:04:54.041521   55086 system_pods.go:61] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running
	I1009 20:04:54.041527   55086 system_pods.go:61] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running
	I1009 20:04:54.041534   55086 system_pods.go:74] duration metric: took 173.537991ms to wait for pod list to return data ...
	I1009 20:04:54.041546   55086 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:04:54.238997   55086 default_sa.go:45] found service account: "default"
	I1009 20:04:54.239025   55086 default_sa.go:55] duration metric: took 197.4714ms for default service account to be created ...
	I1009 20:04:54.239035   55086 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:04:54.442343   55086 system_pods.go:86] 6 kube-system pods found
	I1009 20:04:54.442387   55086 system_pods.go:89] "coredns-7c65d6cfc9-5srcm" [0590290c-6489-499e-91ae-a553df99329f] Running
	I1009 20:04:54.442397   55086 system_pods.go:89] "etcd-pause-739381" [1b986c95-df7c-4fa0-8a45-c4cdc019e1ae] Running
	I1009 20:04:54.442404   55086 system_pods.go:89] "kube-apiserver-pause-739381" [2bfe7580-30af-43dc-a6cc-6b17f0bcc15f] Running
	I1009 20:04:54.442410   55086 system_pods.go:89] "kube-controller-manager-pause-739381" [ff13657b-af05-43a8-ad81-d82135ffe263] Running
	I1009 20:04:54.442416   55086 system_pods.go:89] "kube-proxy-l9sfg" [78c75730-3c5a-44c7-8091-65b6eb07a4f1] Running
	I1009 20:04:54.442421   55086 system_pods.go:89] "kube-scheduler-pause-739381" [c96f1eb3-5b46-4432-b769-944db5d80906] Running
	I1009 20:04:54.442432   55086 system_pods.go:126] duration metric: took 203.389095ms to wait for k8s-apps to be running ...
	I1009 20:04:54.442442   55086 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:04:54.442521   55086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:04:54.461506   55086 system_svc.go:56] duration metric: took 19.054525ms WaitForService to wait for kubelet
	I1009 20:04:54.461546   55086 kubeadm.go:582] duration metric: took 2.993066862s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:04:54.461590   55086 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:04:54.638797   55086 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:04:54.638818   55086 node_conditions.go:123] node cpu capacity is 2
	I1009 20:04:54.638828   55086 node_conditions.go:105] duration metric: took 177.23151ms to run NodePressure ...
	I1009 20:04:54.638838   55086 start.go:241] waiting for startup goroutines ...
	I1009 20:04:54.638844   55086 start.go:246] waiting for cluster config update ...
	I1009 20:04:54.638851   55086 start.go:255] writing updated cluster config ...
	I1009 20:04:54.639169   55086 ssh_runner.go:195] Run: rm -f paused
	I1009 20:04:54.700903   55086 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:04:54.703101   55086 out.go:177] * Done! kubectl is now configured to use "pause-739381" cluster and "default" namespace by default
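With the run finished, the cluster is usable through the kubectl context named after the profile, as the final message states. A short usage sketch:

  kubectl --context pause-739381 get nodes
  kubectl --context pause-739381 get pods -A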
	
	
	==> CRI-O <==
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.704384172Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5srcm,Uid:0590290c-6489-499e-91ae-a553df99329f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728504269383217102,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T20:03:23.342994520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-739381,Uid:68a6b2df97c53dcc73f338337700b206,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1728504269222819756,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.224:8443,kubernetes.io/config.hash: 68a6b2df97c53dcc73f338337700b206,kubernetes.io/config.seen: 2024-10-09T20:03:17.782795629Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&PodSandboxMetadata{Name:etcd-pause-739381,Uid:e08b7c39cc7d517a460612bcd55e5b12,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728504269217644925,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.224:2379,kubernetes.io/config.hash: e08b7c39cc7d517a460612bcd55e5b12,kubernetes.io/config.seen: 2024-10-09T20:03:17.782790819Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-739381,Uid:370e60aa81d0f70e1e47c24fa6206480,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728504269211921636,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 370e60aa81d0f70e1e47c24fa6206480,kubernetes.io/config.seen: 2024-10-09T
20:03:17.782797364Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-739381,Uid:ea7c79faf70efe91c56ff9bd8c98183d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728504269207922049,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea7c79faf70efe91c56ff9bd8c98183d,kubernetes.io/config.seen: 2024-10-09T20:03:17.782798736Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&PodSandboxMetadata{Name:kube-proxy-l9sfg,Uid:78c75730-3c5a-44c7-8091-65b6eb07a4f1,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1728504269181222677,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T20:03:23.043269075Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-5srcm,Uid:0590290c-6489-499e-91ae-a553df99329f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728504203658241684,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io
/config.seen: 2024-10-09T20:03:23.342994520Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:420646fc80a28df79f7d611e5859cdd431a8fd6e42781e5a0c47fbd6e0779bdc,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-96llt,Uid:1689d6f4-aee0-4d7b-a4ae-c5915172ebdf,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728504203638953093,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-96llt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1689d6f4-aee0-4d7b-a4ae-c5915172ebdf,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-09T20:03:23.309819515Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&PodSandboxMetadata{Name:etcd-pause-739381,Uid:e08b7c39cc7d517a460612bcd55e5b12,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728504192178358208,Lab
els:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.224:2379,kubernetes.io/config.hash: e08b7c39cc7d517a460612bcd55e5b12,kubernetes.io/config.seen: 2024-10-09T20:03:11.686426393Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-739381,Uid:ea7c79faf70efe91c56ff9bd8c98183d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728504192172110707,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,tier: contr
ol-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea7c79faf70efe91c56ff9bd8c98183d,kubernetes.io/config.seen: 2024-10-09T20:03:11.686425092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7d515914-a486-49ca-ab98-37dc1f82a8fc name=/runtime.v1.RuntimeService/ListPodSandbox
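The ListPodSandbox response above is the same data that crictl renders in table form. A hedged equivalent on the node, assuming the default CRI-O socket:

  sudo crictl pods   # shows both the SANDBOX_READY (post-restart) and SANDBOX_NOTREADY (pre-restart) sandboxes listed above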
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.705281259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3815cbe1-de40-40e3-9a6d-4ed8dedd0358 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.705336070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3815cbe1-de40-40e3-9a6d-4ed8dedd0358 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.705618041Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3815cbe1-de40-40e3-9a6d-4ed8dedd0358 name=/runtime.v1.RuntimeService/ListContainers
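Likewise, the ListContainers response corresponds to crictl ps; the exited attempt-1 containers recorded alongside the running attempt-2 ones are only visible with -a:

  sudo crictl ps      # CONTAINER_RUNNING containers only
  sudo crictl ps -a   # also includes the CONTAINER_EXITED containers from before the restart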
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.736294993Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e39e70c9-f5ce-4757-b784-af058014fa74 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.736365376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e39e70c9-f5ce-4757-b784-af058014fa74 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.737592527Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d58fd28-df3d-4de3-9f60-7e6907513f1f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.737936977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504297737916442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d58fd28-df3d-4de3-9f60-7e6907513f1f name=/runtime.v1.ImageService/ImageFsInfo
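The Version and ImageFsInfo exchanges above map to two more crictl subcommands; a minimal sketch on the node:

  sudo crictl version       # RuntimeName: cri-o, RuntimeVersion: 1.29.1
  sudo crictl imagefsinfo   # image filesystem usage under /var/lib/containers/storage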
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.738364049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7acd8e5a-0a3a-44d6-8d28-f7f3b3bb5e41 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.738419109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7acd8e5a-0a3a-44d6-8d28-f7f3b3bb5e41 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.738731984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7acd8e5a-0a3a-44d6-8d28-f7f3b3bb5e41 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.786824094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12927c18-4bed-4e7e-ae4c-1d74cf0d73aa name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.786899142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12927c18-4bed-4e7e-ae4c-1d74cf0d73aa name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.788141338Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f8b584e-b1c1-4ffe-a6fa-adc1377e61fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.788710480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504297788685077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f8b584e-b1c1-4ffe-a6fa-adc1377e61fd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.789157620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2a7c71f-8d23-44a2-8f57-d2967d8c0a45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.789214111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2a7c71f-8d23-44a2-8f57-d2967d8c0a45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.789512495Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2a7c71f-8d23-44a2-8f57-d2967d8c0a45 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.835970822Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06be1e86-231f-49ad-97f1-cf651b1f2d5a name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.836043116Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06be1e86-231f-49ad-97f1-cf651b1f2d5a name=/runtime.v1.RuntimeService/Version
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.837437875Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=463667c6-9ad5-49b3-9e33-2cf5c869925c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.837951721Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504297837926184,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=463667c6-9ad5-49b3-9e33-2cf5c869925c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.838670072Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c5555685-ec09-4f01-8c6f-de5e95f04be1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.838719037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c5555685-ec09-4f01-8c6f-de5e95f04be1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:04:57 pause-739381 crio[2312]: time="2024-10-09 20:04:57.838948785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728504275724102464,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728504272963807263,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728504272954356906,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc,PodSandboxId:8fc9d5ea4c9958ee47e9a34a8794ad98cca4ada7e2d7aa46c9af6841d35a551b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728504270348895788,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050,PodSandboxId:3f829ccf65c9f4c6368253ad6e43fdc8cc2f5dbd0ec0c5cad4dac47ac80868bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728504269694839613,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:ma
p[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a,PodSandboxId:3fc6b41dfd531a5445a7a7053b82bdabe71ba58692e921be2e5b09edbebed50b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728504269676418775,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io
.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501,PodSandboxId:f3c21f7889d1b8dbd2d4b066b3b378dd7fc1763f6e6796ff88066fc6691289d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728504269686953538,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68a6b2df97c53dcc73f338337700b206,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490,PodSandboxId:3479b341df219e2964499673d0761a89d2f7ea85693eaa711d5f12ea9da92095,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728504269648648498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370e60aa81d0f70e1e47c24fa6206480,},Annotations:map[string]string{io.kubernetes
.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02,PodSandboxId:7dc21a435cc4058fdaab2872c111d24112480009196623ff1c2726c456ca129f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728504269478857435,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l9sfg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78c75730-3c5a-44c7-8091-65b6eb07a4f1,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kuber
netes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691,PodSandboxId:8859557c18f39e430f12cb5a27261f7fe249e483915ad933ee36c7a7af60cc45,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728504204344849859,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-5srcm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0590290c-6489-499e-91ae-a553df99329f,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\
":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c,PodSandboxId:f9571382275609bb1832179116b0af69c4540709728211e9b458eb42e0d075e4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728504192405893188,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-739381,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: e08b7c39cc7d517a460612bcd55e5b12,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e,PodSandboxId:476e1e798a580604d517d39bd69961060bde9d5751cc18fd3f34284500a65909,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728504192397256012,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-739381,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ea7c79faf70efe91c56ff9bd8c98183d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c5555685-ec09-4f01-8c6f-de5e95f04be1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1304217224b0f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   22 seconds ago       Running             kube-proxy                2                   7dc21a435cc40       kube-proxy-l9sfg
	53146cbeb3c65       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   24 seconds ago       Running             kube-apiserver            2                   f3c21f7889d1b       kube-apiserver-pause-739381
	a4a8e6d67cda3       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   24 seconds ago       Running             kube-controller-manager   2                   3479b341df219       kube-controller-manager-pause-739381
	394e6971c3f78       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   27 seconds ago       Running             coredns                   1                   8fc9d5ea4c995       coredns-7c65d6cfc9-5srcm
	1cdfbd5d9de4c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago       Running             etcd                      1                   3f829ccf65c9f       etcd-pause-739381
	e350b39716de4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago       Exited              kube-apiserver            1                   f3c21f7889d1b       kube-apiserver-pause-739381
	519a1f7f09a98       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago       Running             kube-scheduler            1                   3fc6b41dfd531       kube-scheduler-pause-739381
	6fc70f0a3e935       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   28 seconds ago       Exited              kube-controller-manager   1                   3479b341df219       kube-controller-manager-pause-739381
	d328bf4c4cc54       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   28 seconds ago       Exited              kube-proxy                1                   7dc21a435cc40       kube-proxy-l9sfg
	321149d186b14       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   8859557c18f39       coredns-7c65d6cfc9-5srcm
	f1ae581457f0a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   f957138227560       etcd-pause-739381
	e14673f022497       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   About a minute ago   Exited              kube-scheduler            0                   476e1e798a580       kube-scheduler-pause-739381
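
	The table above is the CRI-level view of every container (running and exited) on the pause-739381 node at the time the report was captured. A minimal way to reproduce a comparable listing by hand, assuming the profile is still up (the command shape is an illustration, not taken from this test run):

	    out/minikube-linux-amd64 -p pause-739381 ssh "sudo crictl ps -a"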
	
	
	==> coredns [321149d186b14dd7cbcfcd57a6cfb9309669f8cfda1910c63c72598993e7e691] <==
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1327751358]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.766) (total time: 30004ms):
	Trace[1327751358]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (20:03:54.769)
	Trace[1327751358]: [30.004028222s] [30.004028222s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[562036867]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.767) (total time: 30003ms):
	Trace[562036867]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (20:03:54.769)
	Trace[562036867]: [30.003393391s] [30.003393391s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[401059428]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (09-Oct-2024 20:03:24.769) (total time: 30001ms):
	Trace[401059428]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (20:03:54.769)
	Trace[401059428]: [30.001562175s] [30.001562175s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
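
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries above indicate that this CoreDNS instance could not reach the in-cluster kubernetes Service (the API server VIP) before it was sent SIGTERM during the restart. As a hedged cross-check once the API server is back, standard kubectl queries (not part of the test itself) can confirm the Service, its endpoints, and the CoreDNS pods:

	    kubectl --context pause-739381 get svc,endpoints kubernetes
	    kubectl --context pause-739381 -n kube-system get pods -l k8s-app=kube-dns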
	
	
	==> coredns [394e6971c3f78ff01aca32410a457d351f7b38b35d5b2b639212a249e627f0dc] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37556 - 38589 "HINFO IN 1564325736694943915.5179022301709361024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019869277s
	
	
	==> describe nodes <==
	Name:               pause-739381
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-739381
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=pause-739381
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_03_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:03:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-739381
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:04:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:04:35 +0000   Wed, 09 Oct 2024 20:03:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.224
	  Hostname:    pause-739381
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f7a4c80d18c4678a9fc7cd63565a4d4
	  System UUID:                7f7a4c80-d18c-4678-a9fc-7cd63565a4d4
	  Boot ID:                    023bdc44-ad50-4618-931f-e669a96c0c51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-5srcm                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-pause-739381                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         101s
	  kube-system                 kube-apiserver-pause-739381             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-pause-739381    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-l9sfg                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-739381             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 93s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientPID     101s               kubelet          Node pause-739381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  101s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node pause-739381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node pause-739381 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 101s               kubelet          Starting kubelet.
	  Normal  NodeReady                100s               kubelet          Node pause-739381 status is now: NodeReady
	  Normal  RegisteredNode           96s                node-controller  Node pause-739381 event: Registered Node pause-739381 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-739381 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-739381 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-739381 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-739381 event: Registered Node pause-739381 in Controller
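
	The node dump above is ordinary "kubectl describe node" output; the duplicated NodeHasSufficient*/RegisteredNode events (roughly 101s ago versus 26s ago) reflect the kubelet restart performed by the pause test. To regenerate the same view directly (standard kubectl, using this run's context name):

	    kubectl --context pause-739381 describe node pause-739381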
	
	
	==> dmesg <==
	[Oct 9 20:03] systemd-fstab-generator[590]: Ignoring "noauto" option for root device
	[  +0.061870] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075208] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.204261] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.182648] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.328905] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.173926] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +0.063682] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.841345] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +1.275251] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.276727] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[  +0.090016] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.386502] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.052753] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[ +11.756946] kauditd_printk_skb: 88 callbacks suppressed
	[Oct 9 20:04] systemd-fstab-generator[2236]: Ignoring "noauto" option for root device
	[  +0.178187] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.190760] systemd-fstab-generator[2262]: Ignoring "noauto" option for root device
	[  +0.140170] systemd-fstab-generator[2274]: Ignoring "noauto" option for root device
	[  +0.300730] systemd-fstab-generator[2302]: Ignoring "noauto" option for root device
	[  +6.954820] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.079201] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.555973] systemd-fstab-generator[3104]: Ignoring "noauto" option for root device
	[  +3.567760] kauditd_printk_skb: 123 callbacks suppressed
	[ +15.766755] systemd-fstab-generator[3446]: Ignoring "noauto" option for root device
	
	
	==> etcd [1cdfbd5d9de4c60c091325960907bc6641b052a210f0d3001b32eba3f3ed0050] <==
	{"level":"info","ts":"2024-10-09T20:04:30.602386Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.602435Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.602444Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-09T20:04:30.603133Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:30.603167Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:30.604017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb switched to configuration voters=(717356955326611387)"}
	{"level":"info","ts":"2024-10-09T20:04:30.606690Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"949c49925e715fcf","local-member-id":"9f49059a365ffbb","added-peer-id":"9f49059a365ffbb","added-peer-peer-urls":["https://192.168.50.224:2380"]}
	{"level":"info","ts":"2024-10-09T20:04:30.607198Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"949c49925e715fcf","local-member-id":"9f49059a365ffbb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:04:30.607347Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:04:31.585848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb received MsgPreVoteResp from 9f49059a365ffbb at term 2"}
	{"level":"info","ts":"2024-10-09T20:04:31.585947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became candidate at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb received MsgVoteResp from 9f49059a365ffbb at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f49059a365ffbb became leader at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.585987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f49059a365ffbb elected leader 9f49059a365ffbb at term 3"}
	{"level":"info","ts":"2024-10-09T20:04:31.588443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:04:31.588442Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f49059a365ffbb","local-member-attributes":"{Name:pause-739381 ClientURLs:[https://192.168.50.224:2379]}","request-path":"/0/members/9f49059a365ffbb/attributes","cluster-id":"949c49925e715fcf","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:04:31.588722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:04:31.589565Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:04:31.589605Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:04:31.590132Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:04:31.591028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.224:2379"}
	{"level":"info","ts":"2024-10-09T20:04:31.591386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:04:31.592305Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [f1ae581457f0ab0be50ee9790d3740ea9e1ca8083fe630288e75b243f908dd3c] <==
	{"level":"info","ts":"2024-10-09T20:03:30.643146Z","caller":"traceutil/trace.go:171","msg":"trace[50539657] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm; range_end:; response_count:1; response_revision:377; }","duration":"208.530142ms","start":"2024-10-09T20:03:30.434606Z","end":"2024-10-09T20:03:30.643136Z","steps":["trace[50539657] 'agreement among raft nodes before linearized reading'  (duration: 208.333979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:03:30.643275Z","caller":"traceutil/trace.go:171","msg":"trace[1521933275] transaction","detail":"{read_only:false; response_revision:377; number_of_response:1; }","duration":"212.292353ms","start":"2024-10-09T20:03:30.430975Z","end":"2024-10-09T20:03:30.643267Z","steps":["trace[1521933275] 'process raft request'  (duration: 125.425468ms)","trace[1521933275] 'compare'  (duration: 86.328613ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T20:03:38.591928Z","caller":"traceutil/trace.go:171","msg":"trace[824177481] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"392.615004ms","start":"2024-10-09T20:03:38.199296Z","end":"2024-10-09T20:03:38.591911Z","steps":["trace[824177481] 'process raft request'  (duration: 392.457274ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:03:38.592094Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:03:38.199275Z","time spent":"392.741842ms","remote":"127.0.0.1:58518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":770,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" mod_revision:375 > success:<request_put:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" value_size:682 lease:9204111285558372408 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-7c65d6cfc9-5srcm.17fce16b5a3bf222\" > >"}
	{"level":"info","ts":"2024-10-09T20:03:38.592199Z","caller":"traceutil/trace.go:171","msg":"trace[1137833378] linearizableReadLoop","detail":"{readStateIndex:399; appliedIndex:399; }","duration":"342.914248ms","start":"2024-10-09T20:03:38.249260Z","end":"2024-10-09T20:03:38.592175Z","steps":["trace[1137833378] 'read index received'  (duration: 342.905443ms)","trace[1137833378] 'applied index is now lower than readState.Index'  (duration: 7.426µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.592440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.167687ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:03:38.592662Z","caller":"traceutil/trace.go:171","msg":"trace[730308718] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:386; }","duration":"343.397136ms","start":"2024-10-09T20:03:38.249253Z","end":"2024-10-09T20:03:38.592650Z","steps":["trace[730308718] 'agreement among raft nodes before linearized reading'  (duration: 343.025935ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:03:38.817732Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.671449ms","expected-duration":"100ms","prefix":"","request":"header:<ID:18427483322413148630 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-739381\" mod_revision:372 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-739381\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-739381\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:03:38.817827Z","caller":"traceutil/trace.go:171","msg":"trace[607989628] linearizableReadLoop","detail":"{readStateIndex:400; appliedIndex:399; }","duration":"225.556283ms","start":"2024-10-09T20:03:38.592259Z","end":"2024-10-09T20:03:38.817815Z","steps":["trace[607989628] 'read index received'  (duration: 41.599506ms)","trace[607989628] 'applied index is now lower than readState.Index'  (duration: 183.955614ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.817919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.497684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm\" ","response":"range_response_count:1 size:5036"}
	{"level":"info","ts":"2024-10-09T20:03:38.817987Z","caller":"traceutil/trace.go:171","msg":"trace[883044233] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-5srcm; range_end:; response_count:1; response_revision:387; }","duration":"263.550183ms","start":"2024-10-09T20:03:38.554407Z","end":"2024-10-09T20:03:38.817957Z","steps":["trace[883044233] 'agreement among raft nodes before linearized reading'  (duration: 263.442547ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:03:38.818174Z","caller":"traceutil/trace.go:171","msg":"trace[1738024261] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"341.968512ms","start":"2024-10-09T20:03:38.476193Z","end":"2024-10-09T20:03:38.818161Z","steps":["trace[1738024261] 'process raft request'  (duration: 157.705332ms)","trace[1738024261] 'compare'  (duration: 183.507444ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:03:38.818279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:03:38.476174Z","time spent":"342.052181ms","remote":"127.0.0.1:41030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-739381\" mod_revision:372 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-739381\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-739381\" > >"}
	{"level":"warn","ts":"2024-10-09T20:03:39.068920Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.129868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:03:39.068984Z","caller":"traceutil/trace.go:171","msg":"trace[609221548] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:387; }","duration":"109.211386ms","start":"2024-10-09T20:03:38.959762Z","end":"2024-10-09T20:03:39.068973Z","steps":["trace[609221548] 'range keys from in-memory index tree'  (duration: 109.017733ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:04:14.640071Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-09T20:04:14.640142Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"pause-739381","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.224:2380"],"advertise-client-urls":["https://192.168.50.224:2379"]}
	{"level":"warn","ts":"2024-10-09T20:04:14.640278Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.640375Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.674361Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.224:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-09T20:04:14.674424Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.224:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-09T20:04:14.674575Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f49059a365ffbb","current-leader-member-id":"9f49059a365ffbb"}
	{"level":"info","ts":"2024-10-09T20:04:14.676927Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:14.677127Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.224:2380"}
	{"level":"info","ts":"2024-10-09T20:04:14.677169Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"pause-739381","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.224:2380"],"advertise-client-urls":["https://192.168.50.224:2379"]}
	
	
	==> kernel <==
	 20:05:00 up 2 min,  0 users,  load average: 0.94, 0.30, 0.10
	Linux pause-739381 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [53146cbeb3c65ecfbd707b7a89453c7f998d3a3403c1325e4cac2e31c126dd03] <==
	I1009 20:04:35.397961       1 policy_source.go:224] refreshing policies
	I1009 20:04:35.464994       1 shared_informer.go:320] Caches are synced for configmaps
	I1009 20:04:35.465259       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1009 20:04:35.465393       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1009 20:04:35.465598       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 20:04:35.468852       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1009 20:04:35.468953       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1009 20:04:35.469170       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1009 20:04:35.469746       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1009 20:04:35.469781       1 aggregator.go:171] initial CRD sync complete...
	I1009 20:04:35.469787       1 autoregister_controller.go:144] Starting autoregister controller
	I1009 20:04:35.469791       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 20:04:35.469795       1 cache.go:39] Caches are synced for autoregister controller
	I1009 20:04:35.477905       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1009 20:04:35.481426       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1009 20:04:35.488367       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1009 20:04:35.491851       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1009 20:04:36.270258       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 20:04:36.750456       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1009 20:04:36.766667       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1009 20:04:36.798430       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1009 20:04:36.824310       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 20:04:36.830340       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 20:04:38.752173       1 controller.go:615] quota admission added evaluator for: endpoints
	I1009 20:04:39.052109       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501] <==
	
	
	==> kube-controller-manager [6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490] <==
	
	
	==> kube-controller-manager [a4a8e6d67cda35e35e5441b90ba1bc338ebf2d44db13562eb084bb7f968eb730] <==
	I1009 20:04:38.747731       1 shared_informer.go:320] Caches are synced for stateful set
	I1009 20:04:38.747814       1 shared_informer.go:320] Caches are synced for crt configmap
	I1009 20:04:38.747887       1 shared_informer.go:320] Caches are synced for node
	I1009 20:04:38.747956       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1009 20:04:38.747975       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1009 20:04:38.747979       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1009 20:04:38.747984       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1009 20:04:38.748038       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-739381"
	I1009 20:04:38.748083       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1009 20:04:38.774135       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1009 20:04:38.847067       1 shared_informer.go:320] Caches are synced for cronjob
	I1009 20:04:38.848129       1 shared_informer.go:320] Caches are synced for taint
	I1009 20:04:38.848231       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1009 20:04:38.848363       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-739381"
	I1009 20:04:38.848454       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1009 20:04:38.861971       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:04:38.887381       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 20:04:38.953696       1 shared_informer.go:320] Caches are synced for namespace
	I1009 20:04:38.997973       1 shared_informer.go:320] Caches are synced for service account
	I1009 20:04:39.397902       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:04:39.398026       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1009 20:04:39.400366       1 shared_informer.go:320] Caches are synced for garbage collector
	I1009 20:04:41.398384       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="49.202363ms"
	I1009 20:04:41.422762       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="21.580678ms"
	I1009 20:04:41.422841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="36.215µs"
	
	
	==> kube-proxy [1304217224b0ff39df5175604fc7f9b46838330c8a14db0394d3ca4581562653] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:04:35.860759       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:04:35.870235       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.224"]
	E1009 20:04:35.870426       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:04:35.904088       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:04:35.904171       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:04:35.904207       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:04:35.906723       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:04:35.907440       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:04:35.907538       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:04:35.909337       1 config.go:199] "Starting service config controller"
	I1009 20:04:35.909403       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:04:35.909450       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:04:35.909536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:04:35.910034       1 config.go:328] "Starting node config controller"
	I1009 20:04:35.910091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:04:36.009889       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:04:36.009904       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:04:36.010242       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02] <==
	I1009 20:04:30.489560       1 server_linux.go:66] "Using iptables proxy"
	
	
	==> kube-scheduler [519a1f7f09a989de14a1e29b0e3e877251a59fc77dd3c97489445234c07e920a] <==
	W1009 20:04:35.298905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.299045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.299345       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299520       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 20:04:35.299586       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299723       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:04:35.300236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.299916       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:04:35.300380       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:04:35.300529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:04:35.300636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.300873       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:04:35.301574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.301778       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:04:35.301817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.309764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:04:35.315677       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.315647       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:04:35.320756       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:04:35.367981       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:04:35.368035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1009 20:04:39.861000       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e14673f0224975bd40cf4e7b5fdb163ee42389ddf6042cd03780cca064cb0c2e] <==
	W1009 20:03:15.323529       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:03:15.323557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:15.323572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 20:03:15.323580       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 20:03:15.323601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.269185       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:03:16.269238       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.322762       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:03:16.322830       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.334152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:03:16.334319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.383532       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.383588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.391539       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.391664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.423638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:03:16.423801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.446352       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:03:16.446982       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 20:03:16.480787       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:03:16.482373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:03:16.583153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:03:16.583209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 20:03:19.112263       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1009 20:04:14.635763       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659300    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/370e60aa81d0f70e1e47c24fa6206480-kubeconfig\") pod \"kube-controller-manager-pause-739381\" (UID: \"370e60aa81d0f70e1e47c24fa6206480\") " pod="kube-system/kube-controller-manager-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659314    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/e08b7c39cc7d517a460612bcd55e5b12-etcd-certs\") pod \"etcd-pause-739381\" (UID: \"e08b7c39cc7d517a460612bcd55e5b12\") " pod="kube-system/etcd-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659329    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/e08b7c39cc7d517a460612bcd55e5b12-etcd-data\") pod \"etcd-pause-739381\" (UID: \"e08b7c39cc7d517a460612bcd55e5b12\") " pod="kube-system/etcd-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659349    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68a6b2df97c53dcc73f338337700b206-ca-certs\") pod \"kube-apiserver-pause-739381\" (UID: \"68a6b2df97c53dcc73f338337700b206\") " pod="kube-system/kube-apiserver-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.659385    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/370e60aa81d0f70e1e47c24fa6206480-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-739381\" (UID: \"370e60aa81d0f70e1e47c24fa6206480\") " pod="kube-system/kube-controller-manager-pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.823136    3111 kubelet_node_status.go:72] "Attempting to register node" node="pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: E1009 20:04:32.824233    3111 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.224:8443: connect: connection refused" node="pause-739381"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.934586    3111 scope.go:117] "RemoveContainer" containerID="e350b39716de4409e31e80d82d61076b2ebe07097840a565b0cf20eb9c6b5501"
	Oct 09 20:04:32 pause-739381 kubelet[3111]: I1009 20:04:32.936598    3111 scope.go:117] "RemoveContainer" containerID="6fc70f0a3e935c384f101477834ada20088b64cd73ff1d273fab4e4f77981490"
	Oct 09 20:04:33 pause-739381 kubelet[3111]: E1009 20:04:33.051176    3111 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-739381?timeout=10s\": dial tcp 192.168.50.224:8443: connect: connection refused" interval="800ms"
	Oct 09 20:04:33 pause-739381 kubelet[3111]: I1009 20:04:33.225454    3111 kubelet_node_status.go:72] "Attempting to register node" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.407980    3111 apiserver.go:52] "Watching apiserver"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446443    3111 kubelet_node_status.go:111] "Node was previously registered" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446677    3111 kubelet_node_status.go:75] "Successfully registered node" node="pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.446736    3111 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.447771    3111 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.452993    3111 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.478566    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/78c75730-3c5a-44c7-8091-65b6eb07a4f1-xtables-lock\") pod \"kube-proxy-l9sfg\" (UID: \"78c75730-3c5a-44c7-8091-65b6eb07a4f1\") " pod="kube-system/kube-proxy-l9sfg"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.479163    3111 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/78c75730-3c5a-44c7-8091-65b6eb07a4f1-lib-modules\") pod \"kube-proxy-l9sfg\" (UID: \"78c75730-3c5a-44c7-8091-65b6eb07a4f1\") " pod="kube-system/kube-proxy-l9sfg"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: E1009 20:04:35.628783    3111 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-739381\" already exists" pod="kube-system/kube-apiserver-pause-739381"
	Oct 09 20:04:35 pause-739381 kubelet[3111]: I1009 20:04:35.713867    3111 scope.go:117] "RemoveContainer" containerID="d328bf4c4cc54ec0300c6c9540562d21c20c7076a3ae85c6be8b19e7db2b9f02"
	Oct 09 20:04:42 pause-739381 kubelet[3111]: E1009 20:04:42.528675    3111 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504282527523473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:42 pause-739381 kubelet[3111]: E1009 20:04:42.528706    3111 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504282527523473,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:52 pause-739381 kubelet[3111]: E1009 20:04:52.533176    3111 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504292532859637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:04:52 pause-739381 kubelet[3111]: E1009 20:04:52.533244    3111 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728504292532859637,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-739381 -n pause-739381
helpers_test.go:261: (dbg) Run:  kubectl --context pause-739381 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (61.71s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (288.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m48.284091812s)

                                                
                                                
-- stdout --
	* [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:06:55.141052   60121 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:06:55.141167   60121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:06:55.141178   60121 out.go:358] Setting ErrFile to fd 2...
	I1009 20:06:55.141185   60121 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:06:55.141380   60121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:06:55.141949   60121 out.go:352] Setting JSON to false
	I1009 20:06:55.142835   60121 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6556,"bootTime":1728497859,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:06:55.142891   60121 start.go:139] virtualization: kvm guest
	I1009 20:06:55.145012   60121 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:06:55.146114   60121 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:06:55.146137   60121 notify.go:220] Checking for updates...
	I1009 20:06:55.148387   60121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:06:55.149574   60121 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:06:55.150699   60121 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:06:55.152404   60121 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:06:55.153991   60121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:06:55.155826   60121 config.go:182] Loaded profile config "NoKubernetes-615869": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1009 20:06:55.155984   60121 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:06:55.156138   60121 config.go:182] Loaded profile config "kubernetes-upgrade-790037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:06:55.156249   60121 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:06:55.194260   60121 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:06:55.195342   60121 start.go:297] selected driver: kvm2
	I1009 20:06:55.195353   60121 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:06:55.195364   60121 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:06:55.195994   60121 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:06:55.196070   60121 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:06:55.211020   60121 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:06:55.211106   60121 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 20:06:55.211421   60121 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:06:55.211461   60121 cni.go:84] Creating CNI manager for ""
	I1009 20:06:55.211514   60121 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:06:55.211525   60121 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 20:06:55.211601   60121 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:06:55.211740   60121 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:06:55.213945   60121 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:06:55.214930   60121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:06:55.214977   60121 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:06:55.214986   60121 cache.go:56] Caching tarball of preloaded images
	I1009 20:06:55.215093   60121 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:06:55.215107   60121 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:06:55.215210   60121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:06:55.215231   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json: {Name:mkde53f0265f7179a64aed2c4d5f0e984ec87cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:06:55.215396   60121 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:07:13.603608   60121 start.go:364] duration metric: took 18.388188133s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:07:13.603689   60121 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:07:13.603807   60121 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:07:13.605815   60121 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 20:07:13.606004   60121 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:07:13.606044   60121 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:07:13.626046   60121 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1009 20:07:13.626535   60121 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:07:13.627187   60121 main.go:141] libmachine: Using API Version  1
	I1009 20:07:13.627214   60121 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:07:13.627589   60121 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:07:13.627801   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:07:13.627929   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:13.628087   60121 start.go:159] libmachine.API.Create for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:07:13.628116   60121 client.go:168] LocalClient.Create starting
	I1009 20:07:13.628147   60121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:07:13.628187   60121 main.go:141] libmachine: Decoding PEM data...
	I1009 20:07:13.628209   60121 main.go:141] libmachine: Parsing certificate...
	I1009 20:07:13.628273   60121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:07:13.628298   60121 main.go:141] libmachine: Decoding PEM data...
	I1009 20:07:13.628317   60121 main.go:141] libmachine: Parsing certificate...
	I1009 20:07:13.628340   60121 main.go:141] libmachine: Running pre-create checks...
	I1009 20:07:13.628353   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .PreCreateCheck
	I1009 20:07:13.628679   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:07:13.629118   60121 main.go:141] libmachine: Creating machine...
	I1009 20:07:13.629135   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .Create
	I1009 20:07:13.629273   60121 main.go:141] libmachine: (old-k8s-version-169021) Creating KVM machine...
	I1009 20:07:13.630450   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found existing default KVM network
	I1009 20:07:13.631543   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:13.631379   60227 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d6:14:56} reservation:<nil>}
	I1009 20:07:13.632615   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:13.632527   60227 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:1c:fa:12} reservation:<nil>}
	I1009 20:07:13.633673   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:13.633592   60227 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000380900}
	I1009 20:07:13.633691   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | created network xml: 
	I1009 20:07:13.633704   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | <network>
	I1009 20:07:13.633712   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   <name>mk-old-k8s-version-169021</name>
	I1009 20:07:13.633727   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   <dns enable='no'/>
	I1009 20:07:13.633735   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   
	I1009 20:07:13.633744   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1009 20:07:13.633760   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |     <dhcp>
	I1009 20:07:13.633790   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1009 20:07:13.633814   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |     </dhcp>
	I1009 20:07:13.633827   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   </ip>
	I1009 20:07:13.633836   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG |   
	I1009 20:07:13.633849   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | </network>
	I1009 20:07:13.633859   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | 
	I1009 20:07:13.639075   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | trying to create private KVM network mk-old-k8s-version-169021 192.168.61.0/24...
	I1009 20:07:13.707451   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | private KVM network mk-old-k8s-version-169021 192.168.61.0/24 created
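	
	The network XML logged above is a plain libvirt <network> definition: DNS disabled, a /24 gateway, and a DHCP range covering .2 to .253. Below is a minimal Go sketch of rendering such a definition from parameters; the networkTmpl template and netParams struct are illustrative assumptions for this report, not minikube's actual code.
	
	// Illustrative sketch only: renders a libvirt network definition similar to
	// the mk-old-k8s-version-169021 network shown in the log above.
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// networkTmpl mirrors the <network> XML created in the log: DNS disabled,
	// a /24 gateway address, and a DHCP range covering .2-.253.
	const networkTmpl = `<network>
	  <name>{{.Name}}</name>
	  <dns enable='no'/>
	  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='{{.ClientMin}}' end='{{.ClientMax}}'/>
	    </dhcp>
	  </ip>
	</network>
	`
	
	type netParams struct {
		Name      string
		Gateway   string
		ClientMin string
		ClientMax string
	}
	
	func main() {
		t := template.Must(template.New("net").Parse(networkTmpl))
		p := netParams{
			Name:      "mk-old-k8s-version-169021",
			Gateway:   "192.168.61.1",
			ClientMin: "192.168.61.2",
			ClientMax: "192.168.61.253",
		}
		// Write the rendered XML to stdout; it could then be handed to
		// `virsh net-define` (or the libvirt API) to create the network.
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
	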
	I1009 20:07:13.707496   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:13.707417   60227 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:07:13.707530   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021 ...
	I1009 20:07:13.707549   60121 main.go:141] libmachine: (old-k8s-version-169021) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:07:13.707736   60121 main.go:141] libmachine: (old-k8s-version-169021) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:07:13.947548   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:13.947416   60227 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa...
	I1009 20:07:14.134981   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:14.134819   60227 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/old-k8s-version-169021.rawdisk...
	I1009 20:07:14.135019   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Writing magic tar header
	I1009 20:07:14.135038   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Writing SSH key tar header
	I1009 20:07:14.135051   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:14.134937   60227 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021 ...
	I1009 20:07:14.135088   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021 (perms=drwx------)
	I1009 20:07:14.135102   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021
	I1009 20:07:14.135113   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:07:14.135133   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:07:14.135166   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:07:14.135180   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:07:14.135191   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:07:14.135202   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:07:14.135214   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:07:14.135228   60121 main.go:141] libmachine: (old-k8s-version-169021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 20:07:14.135238   60121 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:07:14.135251   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:07:14.135268   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:07:14.135312   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Checking permissions on dir: /home
	I1009 20:07:14.135338   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Skipping /home - not owner
	I1009 20:07:14.136428   60121 main.go:141] libmachine: (old-k8s-version-169021) define libvirt domain using xml: 
	I1009 20:07:14.136448   60121 main.go:141] libmachine: (old-k8s-version-169021) <domain type='kvm'>
	I1009 20:07:14.136459   60121 main.go:141] libmachine: (old-k8s-version-169021)   <name>old-k8s-version-169021</name>
	I1009 20:07:14.136471   60121 main.go:141] libmachine: (old-k8s-version-169021)   <memory unit='MiB'>2200</memory>
	I1009 20:07:14.136496   60121 main.go:141] libmachine: (old-k8s-version-169021)   <vcpu>2</vcpu>
	I1009 20:07:14.136515   60121 main.go:141] libmachine: (old-k8s-version-169021)   <features>
	I1009 20:07:14.136535   60121 main.go:141] libmachine: (old-k8s-version-169021)     <acpi/>
	I1009 20:07:14.136543   60121 main.go:141] libmachine: (old-k8s-version-169021)     <apic/>
	I1009 20:07:14.136548   60121 main.go:141] libmachine: (old-k8s-version-169021)     <pae/>
	I1009 20:07:14.136555   60121 main.go:141] libmachine: (old-k8s-version-169021)     
	I1009 20:07:14.136560   60121 main.go:141] libmachine: (old-k8s-version-169021)   </features>
	I1009 20:07:14.136566   60121 main.go:141] libmachine: (old-k8s-version-169021)   <cpu mode='host-passthrough'>
	I1009 20:07:14.136571   60121 main.go:141] libmachine: (old-k8s-version-169021)   
	I1009 20:07:14.136578   60121 main.go:141] libmachine: (old-k8s-version-169021)   </cpu>
	I1009 20:07:14.136583   60121 main.go:141] libmachine: (old-k8s-version-169021)   <os>
	I1009 20:07:14.136591   60121 main.go:141] libmachine: (old-k8s-version-169021)     <type>hvm</type>
	I1009 20:07:14.136619   60121 main.go:141] libmachine: (old-k8s-version-169021)     <boot dev='cdrom'/>
	I1009 20:07:14.136643   60121 main.go:141] libmachine: (old-k8s-version-169021)     <boot dev='hd'/>
	I1009 20:07:14.136656   60121 main.go:141] libmachine: (old-k8s-version-169021)     <bootmenu enable='no'/>
	I1009 20:07:14.136670   60121 main.go:141] libmachine: (old-k8s-version-169021)   </os>
	I1009 20:07:14.136682   60121 main.go:141] libmachine: (old-k8s-version-169021)   <devices>
	I1009 20:07:14.136693   60121 main.go:141] libmachine: (old-k8s-version-169021)     <disk type='file' device='cdrom'>
	I1009 20:07:14.136728   60121 main.go:141] libmachine: (old-k8s-version-169021)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/boot2docker.iso'/>
	I1009 20:07:14.136744   60121 main.go:141] libmachine: (old-k8s-version-169021)       <target dev='hdc' bus='scsi'/>
	I1009 20:07:14.136758   60121 main.go:141] libmachine: (old-k8s-version-169021)       <readonly/>
	I1009 20:07:14.136768   60121 main.go:141] libmachine: (old-k8s-version-169021)     </disk>
	I1009 20:07:14.136780   60121 main.go:141] libmachine: (old-k8s-version-169021)     <disk type='file' device='disk'>
	I1009 20:07:14.136792   60121 main.go:141] libmachine: (old-k8s-version-169021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:07:14.136808   60121 main.go:141] libmachine: (old-k8s-version-169021)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/old-k8s-version-169021.rawdisk'/>
	I1009 20:07:14.136823   60121 main.go:141] libmachine: (old-k8s-version-169021)       <target dev='hda' bus='virtio'/>
	I1009 20:07:14.136834   60121 main.go:141] libmachine: (old-k8s-version-169021)     </disk>
	I1009 20:07:14.136843   60121 main.go:141] libmachine: (old-k8s-version-169021)     <interface type='network'>
	I1009 20:07:14.136855   60121 main.go:141] libmachine: (old-k8s-version-169021)       <source network='mk-old-k8s-version-169021'/>
	I1009 20:07:14.136865   60121 main.go:141] libmachine: (old-k8s-version-169021)       <model type='virtio'/>
	I1009 20:07:14.136885   60121 main.go:141] libmachine: (old-k8s-version-169021)     </interface>
	I1009 20:07:14.136901   60121 main.go:141] libmachine: (old-k8s-version-169021)     <interface type='network'>
	I1009 20:07:14.136912   60121 main.go:141] libmachine: (old-k8s-version-169021)       <source network='default'/>
	I1009 20:07:14.136923   60121 main.go:141] libmachine: (old-k8s-version-169021)       <model type='virtio'/>
	I1009 20:07:14.136934   60121 main.go:141] libmachine: (old-k8s-version-169021)     </interface>
	I1009 20:07:14.136944   60121 main.go:141] libmachine: (old-k8s-version-169021)     <serial type='pty'>
	I1009 20:07:14.136951   60121 main.go:141] libmachine: (old-k8s-version-169021)       <target port='0'/>
	I1009 20:07:14.136960   60121 main.go:141] libmachine: (old-k8s-version-169021)     </serial>
	I1009 20:07:14.136980   60121 main.go:141] libmachine: (old-k8s-version-169021)     <console type='pty'>
	I1009 20:07:14.137000   60121 main.go:141] libmachine: (old-k8s-version-169021)       <target type='serial' port='0'/>
	I1009 20:07:14.137012   60121 main.go:141] libmachine: (old-k8s-version-169021)     </console>
	I1009 20:07:14.137022   60121 main.go:141] libmachine: (old-k8s-version-169021)     <rng model='virtio'>
	I1009 20:07:14.137032   60121 main.go:141] libmachine: (old-k8s-version-169021)       <backend model='random'>/dev/random</backend>
	I1009 20:07:14.137042   60121 main.go:141] libmachine: (old-k8s-version-169021)     </rng>
	I1009 20:07:14.137050   60121 main.go:141] libmachine: (old-k8s-version-169021)     
	I1009 20:07:14.137058   60121 main.go:141] libmachine: (old-k8s-version-169021)     
	I1009 20:07:14.137072   60121 main.go:141] libmachine: (old-k8s-version-169021)   </devices>
	I1009 20:07:14.137087   60121 main.go:141] libmachine: (old-k8s-version-169021) </domain>
	I1009 20:07:14.137104   60121 main.go:141] libmachine: (old-k8s-version-169021) 
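	
	Once a domain XML like the one defined above has been written to disk, it can be registered and booted with the stock virsh CLI. A minimal sketch of that, assuming the XML was saved to a hypothetical /tmp/old-k8s-version-169021.xml (this is not minikube's code path, which goes through the libvirt API directly):
	
	// Illustrative sketch only: define and start a libvirt domain via virsh.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func runVirsh(args ...string) error {
		// Same connection URI the log shows (KVMQemuURI:qemu:///system).
		cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("virsh %v:\n%s\n", args, out)
		return err
	}
	
	func main() {
		// Assumes the domain XML shown in the log was saved to this hypothetical file.
		if err := runVirsh("define", "/tmp/old-k8s-version-169021.xml"); err != nil {
			panic(err)
		}
		// Starting the domain corresponds to the "Creating domain..." step in the log.
		if err := runVirsh("start", "old-k8s-version-169021"); err != nil {
			panic(err)
		}
	}
	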
	I1009 20:07:14.141998   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:ec:7e:96 in network default
	I1009 20:07:14.142767   60121 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:07:14.142787   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:14.143549   60121 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:07:14.143888   60121 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:07:14.144461   60121 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:07:14.145314   60121 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:07:15.393188   60121 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:07:15.394108   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:15.394690   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:15.394740   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:15.394674   60227 retry.go:31] will retry after 270.250516ms: waiting for machine to come up
	I1009 20:07:15.667100   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:15.667635   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:15.667664   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:15.667592   60227 retry.go:31] will retry after 362.697499ms: waiting for machine to come up
	I1009 20:07:16.032052   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:16.032539   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:16.032567   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:16.032494   60227 retry.go:31] will retry after 404.752591ms: waiting for machine to come up
	I1009 20:07:16.439029   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:16.439440   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:16.439466   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:16.439412   60227 retry.go:31] will retry after 378.555711ms: waiting for machine to come up
	I1009 20:07:16.820052   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:16.820587   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:16.820618   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:16.820543   60227 retry.go:31] will retry after 605.456729ms: waiting for machine to come up
	I1009 20:07:17.427154   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:17.427579   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:17.427601   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:17.427543   60227 retry.go:31] will retry after 768.664155ms: waiting for machine to come up
	I1009 20:07:18.197519   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:18.197997   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:18.198020   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:18.197936   60227 retry.go:31] will retry after 1.014145249s: waiting for machine to come up
	I1009 20:07:19.213249   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:19.213761   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:19.213785   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:19.213720   60227 retry.go:31] will retry after 1.48464444s: waiting for machine to come up
	I1009 20:07:20.700184   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:20.700670   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:20.700695   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:20.700629   60227 retry.go:31] will retry after 1.312144799s: waiting for machine to come up
	I1009 20:07:22.015031   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:22.015576   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:22.015606   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:22.015516   60227 retry.go:31] will retry after 1.559733816s: waiting for machine to come up
	I1009 20:07:23.577564   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:23.578102   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:23.578122   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:23.578048   60227 retry.go:31] will retry after 2.710257878s: waiting for machine to come up
	I1009 20:07:26.290334   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:26.290858   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:26.290886   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:26.290809   60227 retry.go:31] will retry after 2.208228797s: waiting for machine to come up
	I1009 20:07:28.500110   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:28.500588   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:28.500622   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:28.500547   60227 retry.go:31] will retry after 3.52629121s: waiting for machine to come up
	I1009 20:07:32.525718   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:32.526114   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:07:32.526138   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:07:32.526081   60227 retry.go:31] will retry after 5.314550016s: waiting for machine to come up
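	
	The retry.go lines above show a poll-with-growing-backoff pattern while waiting for the DHCP lease: the delay climbs from roughly 270ms to several seconds, with some jitter. The sketch below is a generic reimplementation of that pattern for illustration only; it is not minikube's retry package, and the waitFor helper and its parameters are assumptions.
	
	// Illustrative sketch of the wait-for-IP backoff visible in the log.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitFor polls check() until it succeeds or timeout elapses, roughly
	// doubling the delay (with jitter) between attempts.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machine to come up")
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
	}
	
	func main() {
		attempts := 0
		err := waitFor(func() error {
			attempts++
			if attempts < 4 { // pretend the DHCP lease shows up on the 4th poll
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("result:", err)
	}
	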
	I1009 20:07:37.845628   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:37.846097   60121 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:07:37.846121   60121 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:07:37.846136   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:37.846460   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021
	I1009 20:07:37.918667   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:07:37.918713   60121 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:07:37.918728   60121 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:07:37.921421   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:37.921844   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:minikube Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:37.921881   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:37.922004   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:07:37.922389   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:07:37.922660   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:07:37.922682   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:07:37.922695   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:07:38.046574   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
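	
	The "external SSH client" probe above does nothing more than run `exit 0` over ssh with a fixed option set; a zero exit status means sshd is up and key authentication works. An equivalent sketch using os/exec is shown below, with the key path, address, and options copied from the log line; this is an illustration, not minikube's sshutil code.
	
	// Illustrative sketch: reproduce the external SSH liveness probe from the log.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "ControlMaster=no",
			"-o", "ControlPath=none",
			"-o", "LogLevel=quiet",
			"-o", "PasswordAuthentication=no",
			"-o", "ServerAliveInterval=60",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa",
			"-p", "22",
			"docker@192.168.61.119",
			"exit 0",
		}
		// A nil error (exit status 0) is all the "Waiting for SSH" step needs to see.
		out, err := exec.Command("ssh", args...).CombinedOutput()
		fmt.Printf("ssh output: %q, err: %v\n", out, err)
	}
	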
	I1009 20:07:38.046853   60121 main.go:141] libmachine: (old-k8s-version-169021) KVM machine creation complete!
	I1009 20:07:38.047162   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:07:38.047749   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:38.047919   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:38.048067   60121 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:07:38.048086   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:07:38.049278   60121 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:07:38.049290   60121 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:07:38.049297   60121 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:07:38.049304   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.051690   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.052005   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.052033   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.052146   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.052278   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.052421   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.052533   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.052652   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:38.052843   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:38.052864   60121 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:07:38.158615   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:07:38.158639   60121 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:07:38.158659   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.162054   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.162474   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.162510   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.162648   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.162840   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.163008   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.163165   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.163358   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:38.163571   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:38.163587   60121 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:07:38.271498   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:07:38.271561   60121 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:07:38.271568   60121 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:07:38.271575   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:07:38.271813   60121 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:07:38.271841   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:07:38.272002   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.274524   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.274823   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.274849   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.274959   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.275127   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.275265   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.275390   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.275518   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:38.275678   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:38.275689   60121 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:07:38.401471   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:07:38.401498   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.404045   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.404412   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.404446   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.404591   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.404766   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.404884   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.404998   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.405130   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:38.405310   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:38.405346   60121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:07:38.524869   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:07:38.524902   60121 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:07:38.524934   60121 buildroot.go:174] setting up certificates
	I1009 20:07:38.524949   60121 provision.go:84] configureAuth start
	I1009 20:07:38.524959   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:07:38.525248   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:07:38.527700   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.527993   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.528021   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.528137   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.529911   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.530186   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.530216   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.530473   60121 provision.go:143] copyHostCerts
	I1009 20:07:38.530546   60121 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:07:38.530560   60121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:07:38.530625   60121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:07:38.530728   60121 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:07:38.530740   60121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:07:38.530768   60121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:07:38.530837   60121 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:07:38.530848   60121 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:07:38.530872   60121 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:07:38.530930   60121 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:07:38.804422   60121 provision.go:177] copyRemoteCerts
	I1009 20:07:38.804482   60121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:07:38.804503   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.807120   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.807420   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.807452   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.807593   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.807752   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.807917   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.808056   60121 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:07:38.893862   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:07:38.918172   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:07:38.940942   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:07:38.963273   60121 provision.go:87] duration metric: took 438.311528ms to configureAuth
	I1009 20:07:38.963300   60121 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:07:38.963446   60121 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:07:38.963524   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:38.965974   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.966339   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:38.966372   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:38.966497   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:38.966657   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.966803   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:38.966934   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:38.967117   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:38.967364   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:38.967392   60121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:07:39.192033   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:07:39.192059   60121 main.go:141] libmachine: Checking connection to Docker...
	I1009 20:07:39.192093   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetURL
	I1009 20:07:39.193154   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using libvirt version 6000000
	I1009 20:07:39.195303   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.195583   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.195612   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.195721   60121 main.go:141] libmachine: Docker is up and running!
	I1009 20:07:39.195740   60121 main.go:141] libmachine: Reticulating splines...
	I1009 20:07:39.195748   60121 client.go:171] duration metric: took 25.567622496s to LocalClient.Create
	I1009 20:07:39.195773   60121 start.go:167] duration metric: took 25.567686585s to libmachine.API.Create "old-k8s-version-169021"
	I1009 20:07:39.195782   60121 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:07:39.195790   60121 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:07:39.195806   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:39.196014   60121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:07:39.196054   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:39.198136   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.198401   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.198425   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.198496   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:39.198657   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:39.198803   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:39.198907   60121 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:07:39.281480   60121 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:07:39.285646   60121 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:07:39.285668   60121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:07:39.285730   60121 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:07:39.285826   60121 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:07:39.285948   60121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:07:39.294690   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:07:39.317661   60121 start.go:296] duration metric: took 121.868674ms for postStartSetup
	I1009 20:07:39.317702   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:07:39.318266   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:07:39.320900   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.321240   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.321264   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.321494   60121 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:07:39.321712   60121 start.go:128] duration metric: took 25.71789269s to createHost
	I1009 20:07:39.321737   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:39.323819   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.324106   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.324130   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.324272   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:39.324435   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:39.324587   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:39.324704   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:39.324870   60121 main.go:141] libmachine: Using SSH client type: native
	I1009 20:07:39.325031   60121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:07:39.325046   60121 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:07:39.436031   60121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728504459.395338509
	
	I1009 20:07:39.436054   60121 fix.go:216] guest clock: 1728504459.395338509
	I1009 20:07:39.436061   60121 fix.go:229] Guest: 2024-10-09 20:07:39.395338509 +0000 UTC Remote: 2024-10-09 20:07:39.321724526 +0000 UTC m=+44.219433879 (delta=73.613983ms)
	I1009 20:07:39.436080   60121 fix.go:200] guest clock delta is within tolerance: 73.613983ms
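	
	The guest-clock check above compares the timestamp reported by the VM (via `date +%s.%N`) with the host's view of the same moment and accepts the machine when the delta is small. A tiny sketch reproducing the computation with the two timestamps from the log follows; the one-second tolerance used here is an assumption for illustration.
	
	// Illustrative sketch of the guest/host clock-delta check logged above.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}
	
	func main() {
		// Values taken from the log: guest 20:07:39.395338509, host 20:07:39.321724526.
		guest := time.Date(2024, 10, 9, 20, 7, 39, 395338509, time.UTC)
		host := time.Date(2024, 10, 9, 20, 7, 39, 321724526, time.UTC)
		delta, ok := clockDeltaOK(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // ~73.613983ms, true
	}
	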
	I1009 20:07:39.436084   60121 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 25.832429028s
	I1009 20:07:39.436110   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:39.436365   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:07:39.438827   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.439181   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.439353   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.439373   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:39.439832   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:39.440013   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:07:39.440113   60121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:07:39.440160   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:39.440203   60121 ssh_runner.go:195] Run: cat /version.json
	I1009 20:07:39.440226   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:07:39.442760   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.443027   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.443154   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.443181   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.443298   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:39.443465   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:39.443547   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:39.443578   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:39.443623   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:39.443769   60121 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:07:39.443828   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:07:39.443963   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:07:39.444092   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:07:39.444210   60121 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:07:39.549791   60121 ssh_runner.go:195] Run: systemctl --version
	I1009 20:07:39.555528   60121 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:07:39.718032   60121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:07:39.725329   60121 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:07:39.725385   60121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:07:39.740823   60121 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
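The find/mv step above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime-specific network plugin configured later does not conflict with them. A minimal Go sketch of the same rename, run locally instead of through the SSH runner (arguments copied from the log line above; root access assumed):

package main

import (
	"fmt"
	"os/exec"
)

// Rename any bridge/podman CNI configs under /etc/cni/net.d so they no longer
// conflict with the network plugin configured later. Arguments mirror the
// find invocation from the log, run locally instead of over SSH.
func main() {
	cmd := exec.Command("sudo", "find", "/etc/cni/net.d", "-maxdepth", "1", "-type", "f",
		"(", "(", "-name", "*bridge*", "-or", "-name", "*podman*", ")",
		"-and", "-not", "-name", "*.mk_disabled", ")",
		"-printf", "%p, ",
		"-exec", "sh", "-c", "sudo mv {} {}.mk_disabled", ";")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("disabling CNI configs failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("disabled: %s\n", out)
}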
	I1009 20:07:39.740842   60121 start.go:495] detecting cgroup driver to use...
	I1009 20:07:39.740889   60121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:07:39.756723   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:07:39.770472   60121 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:07:39.770530   60121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:07:39.784226   60121 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:07:39.798610   60121 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:07:39.907872   60121 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:07:40.051491   60121 docker.go:233] disabling docker service ...
	I1009 20:07:40.051580   60121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:07:40.065835   60121 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:07:40.078596   60121 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:07:40.196105   60121 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:07:40.311471   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:07:40.324856   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:07:40.342595   60121 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:07:40.342648   60121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:07:40.352174   60121 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:07:40.352227   60121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:07:40.361820   60121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:07:40.371382   60121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:07:40.380910   60121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:07:40.390881   60121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:07:40.399216   60121 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:07:40.399293   60121 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:07:40.411455   60121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:07:40.420290   60121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:07:40.551702   60121 ssh_runner.go:195] Run: sudo systemctl restart crio
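The sed edits a few lines above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o serves the pause image and cgroup driver this Kubernetes version expects, after which cri-o is restarted. A minimal sketch of the same edits run locally (paths and values copied from the log; sudo access assumed), not minikube's actual implementation:

package main

import (
	"log"
	"os/exec"
)

// run executes one shell snippet as root and aborts on failure.
func run(script string) {
	out, err := exec.Command("sudo", "sh", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("%s: %v\n%s", script, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"

	// Point cri-o at the pause image kubeadm v1.20 expects.
	run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' ` + conf)
	// Use cgroupfs as the cgroup manager and keep conmon in the pod cgroup.
	run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
	run(`sed -i '/conmon_cgroup = .*/d' ` + conf)
	run(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)

	// Pick up the new configuration.
	run("systemctl daemon-reload")
	run("systemctl restart crio")
}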
	I1009 20:07:40.655961   60121 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:07:40.656027   60121 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:07:40.660935   60121 start.go:563] Will wait 60s for crictl version
	I1009 20:07:40.660987   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:40.664736   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:07:40.712786   60121 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:07:40.712861   60121 ssh_runner.go:195] Run: crio --version
	I1009 20:07:40.752433   60121 ssh_runner.go:195] Run: crio --version
	I1009 20:07:40.784927   60121 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:07:40.786098   60121 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:07:40.789051   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:40.789384   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:07:28 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:07:40.789411   60121 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:07:40.789600   60121 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:07:40.793911   60121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
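The grep/cp pair above is an idempotent way to pin host.minikube.internal in /etc/hosts: drop any stale entry, append the gateway IP, and copy the result back with root rights. A small local sketch of the same idiom (IP and hostname taken from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	ip, host := "192.168.61.1", "host.minikube.internal"

	// Strip any stale line for the host, append the fresh mapping to a temp
	// file, then copy it over /etc/hosts in one sudo step - the same pattern
	// the log shows being run over SSH.
	script := fmt.Sprintf(
		"{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
		host, ip, host)
	if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
		log.Fatalf("update /etc/hosts: %v\n%s", err, out)
	}
}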
	I1009 20:07:40.806435   60121 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:07:40.806540   60121 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:07:40.806589   60121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:07:40.839034   60121 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:07:40.839137   60121 ssh_runner.go:195] Run: which lz4
	I1009 20:07:40.843110   60121 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:07:40.847468   60121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:07:40.847504   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:07:42.500717   60121 crio.go:462] duration metric: took 1.657646543s to copy over tarball
	I1009 20:07:42.500837   60121 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:07:45.043988   60121 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.54311374s)
	I1009 20:07:45.044028   60121 crio.go:469] duration metric: took 2.543272242s to extract the tarball
	I1009 20:07:45.044039   60121 ssh_runner.go:146] rm: /preloaded.tar.lz4
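With no preloaded images in the runtime, the cached tarball is copied to the node and unpacked into /var with tar and the lz4 decompressor, then deleted. A compact sketch of that unpack step (flags and path copied from the log; assumes lz4 is installed and /preloaded.tar.lz4 is already in place):

package main

import (
	"log"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Unpack the preloaded image store into /var, keeping extended attributes
	// such as file capabilities - the same tar flags shown in the log.
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := extract.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v\n%s", err, out)
	}

	// The tarball is no longer needed once extracted.
	if out, err := exec.Command("sudo", "rm", "-f", tarball).CombinedOutput(); err != nil {
		log.Fatalf("remove %s: %v\n%s", tarball, err, out)
	}
}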
	I1009 20:07:45.087916   60121 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:07:45.131602   60121 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:07:45.131631   60121 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:07:45.131723   60121 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:07:45.131758   60121 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.131781   60121 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.131770   60121 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.131756   60121 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:07:45.131948   60121 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.131717   60121 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.131727   60121 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.133202   60121 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.133245   60121 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.133233   60121 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.133336   60121 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.135231   60121 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.135298   60121 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:07:45.135311   60121 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.135558   60121 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:07:45.292829   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.300141   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.300576   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.302692   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.328583   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.341858   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.360661   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:07:45.405064   60121 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:07:45.405123   60121 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.405178   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.416743   60121 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:07:45.416791   60121 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.416842   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.444858   60121 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:07:45.444909   60121 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.444858   60121 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:07:45.444998   60121 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.444965   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.445055   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.474267   60121 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:07:45.474312   60121 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.474357   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.474360   60121 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:07:45.474398   60121 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.474455   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.492494   60121 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:07:45.492538   60121 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:07:45.492560   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.492589   60121 ssh_runner.go:195] Run: which crictl
	I1009 20:07:45.492604   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.492663   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.492691   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.492700   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.492758   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.604223   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.604373   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.607246   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.633165   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.637553   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.645265   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:07:45.645297   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.733137   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:07:45.746752   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:07:45.750114   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:07:45.776532   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:07:45.794519   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:07:45.826318   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:07:45.826341   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:07:45.896025   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:07:45.896094   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:07:45.898283   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:07:45.927602   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:07:45.927634   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:07:45.929345   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:07:45.942480   60121 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:07:45.972526   60121 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:07:46.334591   60121 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:07:46.471999   60121 cache_images.go:92] duration metric: took 1.340349886s to LoadCachedImages
	W1009 20:07:46.472110   60121 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1009 20:07:46.472128   60121 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:07:46.472254   60121 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
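The kubelet [Unit]/[Service] fragment above becomes the systemd drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) that clears the stock ExecStart and starts the v1.20.0 kubelet against the CRI-O socket with this node's hostname override and IP. A hedged sketch that writes an equivalent drop-in when run as root (content taken from the log; minikube's own helper differs):

package main

import (
	"log"
	"os"
)

// Write a kubelet drop-in matching the unit text in the log. Run as root;
// a `systemctl daemon-reload` is still needed afterwards.
func main() {
	const dir = "/etc/systemd/system/kubelet.service.d"
	const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119

[Install]
`
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(dir+"/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		log.Fatal(err)
	}
}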
	I1009 20:07:46.472350   60121 ssh_runner.go:195] Run: crio config
	I1009 20:07:46.518384   60121 cni.go:84] Creating CNI manager for ""
	I1009 20:07:46.518409   60121 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:07:46.518426   60121 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:07:46.518455   60121 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:07:46.518622   60121 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
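The YAML above is written to /var/tmp/minikube/kubeadm.yaml.new just below and copied into place before kubeadm init runs. One way to exercise a rendered config like this on the node without creating cluster state is a dry run against the same kubeadm binary; the sketch below assumes the binary and config paths shown in the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// --dry-run makes kubeadm print what it would do instead of doing it,
	// which is enough to catch config parse or validation problems early.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.20.0/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--dry-run")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubeadm dry run failed: %v\n%s", err, out)
	}
	log.Printf("kubeadm dry run output:\n%s", out)
}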
	I1009 20:07:46.518697   60121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:07:46.530065   60121 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:07:46.530144   60121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:07:46.540380   60121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:07:46.556438   60121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:07:46.573023   60121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:07:46.589830   60121 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:07:46.593758   60121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:07:46.605737   60121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:07:46.716220   60121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:07:46.732447   60121 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:07:46.732480   60121 certs.go:194] generating shared ca certs ...
	I1009 20:07:46.732501   60121 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:46.732684   60121 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:07:46.732742   60121 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:07:46.732758   60121 certs.go:256] generating profile certs ...
	I1009 20:07:46.732825   60121 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:07:46.732849   60121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt with IP's: []
	I1009 20:07:46.931894   60121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt ...
	I1009 20:07:46.931930   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: {Name:mkf3c48017c34a9ba0fb8498e84f0eeccec5a3c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:46.932111   60121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key ...
	I1009 20:07:46.932128   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key: {Name:mk59b32228374742e8f2fe85fe15b15d2e3caed8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:46.932218   60121 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:07:46.932235   60121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt.f77cd192 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.119]
	I1009 20:07:46.994248   60121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt.f77cd192 ...
	I1009 20:07:46.994283   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt.f77cd192: {Name:mk6b474b6d29814eab88e297c7c6ac01b581528e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:46.994440   60121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192 ...
	I1009 20:07:46.994459   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192: {Name:mk6c35e0a92daed9b1fc0cb7966e0142a8060c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:46.994565   60121 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt.f77cd192 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt
	I1009 20:07:46.994670   60121 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key
	I1009 20:07:46.994733   60121 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:07:46.994749   60121 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt with IP's: []
	I1009 20:07:47.079607   60121 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt ...
	I1009 20:07:47.079639   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt: {Name:mk0774425a5e9c6ec8f4529586a644cfd3b6e950 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:47.079811   60121 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key ...
	I1009 20:07:47.079827   60121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key: {Name:mk24aacc243d50c97169958ebdc3521e4f65b096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:07:47.079992   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:07:47.080025   60121 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:07:47.080033   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:07:47.080055   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:07:47.080076   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:07:47.080112   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:07:47.080146   60121 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:07:47.080721   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:07:47.107141   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:07:47.132928   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:07:47.158689   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:07:47.182787   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:07:47.206468   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:07:47.231085   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:07:47.258320   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:07:47.282946   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:07:47.306542   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:07:47.330567   60121 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:07:47.356141   60121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:07:47.372423   60121 ssh_runner.go:195] Run: openssl version
	I1009 20:07:47.378305   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:07:47.388658   60121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:07:47.393101   60121 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:07:47.393161   60121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:07:47.398888   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:07:47.410483   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:07:47.422303   60121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:07:47.427082   60121 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:07:47.427150   60121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:07:47.433005   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:07:47.443902   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:07:47.454839   60121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:07:47.460105   60121 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:07:47.460162   60121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:07:47.466240   60121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
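The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA directory convention: the certificate's subject hash becomes the symlink name (<hash>.0, e.g. b5213941.0 for minikubeCA.pem) under /etc/ssl/certs, which is how TLS clients locate the CA by subject. A small sketch of computing that hash and creating the link for one certificate (paths copied from the log; root assumed for the symlink):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl prints the subject hash used as the symlink name, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("hash %s: %v", cert, err)
	}
	hash := strings.TrimSpace(string(out))

	// Link /etc/ssl/certs/<hash>.0 at the certificate so OpenSSL-based
	// clients can find it by subject, mirroring the ln -fs in the log.
	link := "/etc/ssl/certs/" + hash + ".0"
	if o, err := exec.Command("sudo", "ln", "-fs", cert, link).CombinedOutput(); err != nil {
		log.Fatalf("ln %s: %v\n%s", link, err, o)
	}
	log.Printf("linked %s -> %s", link, cert)
}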
	I1009 20:07:47.477785   60121 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:07:47.482082   60121 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:07:47.482131   60121 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:07:47.482193   60121 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:07:47.482231   60121 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:07:47.526455   60121 cri.go:89] found id: ""
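Before deciding whether stale configuration needs cleanup, minikube asks the CRI runtime for any existing kube-system containers; here the list comes back empty (found id: ""), so this is treated as a fresh start. A minimal local version of that query (label filter and socket copied from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask the CRI runtime for all containers (running or not) whose pod lives
	// in the kube-system namespace, printing only container IDs.
	out, err := exec.Command("sudo", "crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl ps: %v", err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}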
	I1009 20:07:47.526533   60121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:07:47.536672   60121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:07:47.547260   60121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:07:47.559140   60121 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:07:47.559167   60121 kubeadm.go:157] found existing configuration files:
	
	I1009 20:07:47.559220   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:07:47.575961   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:07:47.576019   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:07:47.585077   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:07:47.594290   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:07:47.594336   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:07:47.607328   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:07:47.621535   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:07:47.621588   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:07:47.630850   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:07:47.649960   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:07:47.650012   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:07:47.662632   60121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:07:47.925374   60121 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:09:45.541560   60121 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:09:45.541672   60121 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:09:45.543667   60121 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:09:45.543728   60121 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:09:45.543860   60121 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:09:45.544008   60121 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:09:45.544153   60121 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:09:45.544219   60121 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:09:45.545554   60121 out.go:235]   - Generating certificates and keys ...
	I1009 20:09:45.545641   60121 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:09:45.545704   60121 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:09:45.545790   60121 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 20:09:45.545864   60121 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 20:09:45.545953   60121 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 20:09:45.546019   60121 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 20:09:45.546085   60121 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 20:09:45.546239   60121 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	I1009 20:09:45.546290   60121 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 20:09:45.546400   60121 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	I1009 20:09:45.546488   60121 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 20:09:45.546599   60121 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 20:09:45.546673   60121 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 20:09:45.546749   60121 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:09:45.546825   60121 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:09:45.546916   60121 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:09:45.547012   60121 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:09:45.547108   60121 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:09:45.547249   60121 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:09:45.547364   60121 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:09:45.547422   60121 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:09:45.547546   60121 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:09:45.548895   60121 out.go:235]   - Booting up control plane ...
	I1009 20:09:45.548986   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:09:45.549070   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:09:45.549159   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:09:45.549252   60121 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:09:45.549432   60121 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:09:45.549475   60121 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:09:45.549550   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:09:45.549748   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:09:45.549831   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:09:45.550092   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:09:45.550173   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:09:45.550399   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:09:45.550482   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:09:45.550736   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:09:45.550847   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:09:45.551125   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:09:45.551136   60121 kubeadm.go:310] 
	I1009 20:09:45.551202   60121 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:09:45.551264   60121 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:09:45.551273   60121 kubeadm.go:310] 
	I1009 20:09:45.551302   60121 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:09:45.551333   60121 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:09:45.551419   60121 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:09:45.551425   60121 kubeadm.go:310] 
	I1009 20:09:45.551509   60121 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:09:45.551540   60121 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:09:45.551575   60121 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:09:45.551585   60121 kubeadm.go:310] 
	I1009 20:09:45.551677   60121 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:09:45.551750   60121 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:09:45.551756   60121 kubeadm.go:310] 
	I1009 20:09:45.551851   60121 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:09:45.551944   60121 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:09:45.552046   60121 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:09:45.552149   60121 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:09:45.552172   60121 kubeadm.go:310] 
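The repeated kubelet-check failures above mean nothing ever answered on the kubelet's health port, so the control-plane static pods never came up and kubeadm timed out. The check is simply an HTTP GET against http://localhost:10248/healthz; a tiny probe like the one below, run on the node, reproduces it and is a quick first step before digging into journalctl -xeu kubelet:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}

	// Same endpoint the kubeadm kubelet-check polls; "ok" means the kubelet
	// is up, while connection refused means it never started or has crashed.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, body)
}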
	W1009 20:09:45.552274   60121 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-169021] and IPs [192.168.61.119 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:09:45.552320   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:09:46.317526   60121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:09:46.331422   60121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:09:46.341786   60121 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:09:46.341805   60121 kubeadm.go:157] found existing configuration files:
	
	I1009 20:09:46.341849   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:09:46.351654   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:09:46.351722   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:09:46.361386   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:09:46.370949   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:09:46.371007   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:09:46.380717   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:09:46.390070   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:09:46.390119   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:09:46.399564   60121 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:09:46.408605   60121 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:09:46.408648   60121 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:09:46.418014   60121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:09:46.645876   60121 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:11:42.752209   60121 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:11:42.752349   60121 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:11:42.754154   60121 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:11:42.754223   60121 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:11:42.754313   60121 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:11:42.754423   60121 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:11:42.754540   60121 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:11:42.754626   60121 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:11:42.756382   60121 out.go:235]   - Generating certificates and keys ...
	I1009 20:11:42.756473   60121 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:11:42.756548   60121 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:11:42.756619   60121 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:11:42.756675   60121 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:11:42.756747   60121 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:11:42.756804   60121 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:11:42.756883   60121 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:11:42.756975   60121 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:11:42.757073   60121 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:11:42.757163   60121 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:11:42.757199   60121 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:11:42.757246   60121 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:11:42.757304   60121 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:11:42.757352   60121 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:11:42.757405   60121 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:11:42.757453   60121 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:11:42.757542   60121 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:11:42.757615   60121 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:11:42.757656   60121 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:11:42.757715   60121 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:11:42.759161   60121 out.go:235]   - Booting up control plane ...
	I1009 20:11:42.759234   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:11:42.759318   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:11:42.759380   60121 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:11:42.759447   60121 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:11:42.759575   60121 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:11:42.759618   60121 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:11:42.759679   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:11:42.759853   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:11:42.759919   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:11:42.760090   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:11:42.760156   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:11:42.760318   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:11:42.760376   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:11:42.760533   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:11:42.760590   60121 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:11:42.760751   60121 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:11:42.760758   60121 kubeadm.go:310] 
	I1009 20:11:42.760791   60121 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:11:42.760856   60121 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:11:42.760875   60121 kubeadm.go:310] 
	I1009 20:11:42.760992   60121 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:11:42.761025   60121 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:11:42.761111   60121 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:11:42.761117   60121 kubeadm.go:310] 
	I1009 20:11:42.761209   60121 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:11:42.761247   60121 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:11:42.761285   60121 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:11:42.761291   60121 kubeadm.go:310] 
	I1009 20:11:42.761401   60121 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:11:42.761523   60121 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:11:42.761536   60121 kubeadm.go:310] 
	I1009 20:11:42.761709   60121 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:11:42.761789   60121 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:11:42.761854   60121 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:11:42.761918   60121 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:11:42.761927   60121 kubeadm.go:310] 
	I1009 20:11:42.761975   60121 kubeadm.go:394] duration metric: took 3m55.279847683s to StartCluster
	I1009 20:11:42.762016   60121 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:11:42.762061   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:11:42.821784   60121 cri.go:89] found id: ""
	I1009 20:11:42.821813   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.821823   60121 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:11:42.821830   60121 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:11:42.821896   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:11:42.862795   60121 cri.go:89] found id: ""
	I1009 20:11:42.862819   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.862826   60121 logs.go:284] No container was found matching "etcd"
	I1009 20:11:42.862831   60121 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:11:42.862879   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:11:42.897140   60121 cri.go:89] found id: ""
	I1009 20:11:42.897168   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.897175   60121 logs.go:284] No container was found matching "coredns"
	I1009 20:11:42.897180   60121 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:11:42.897226   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:11:42.929459   60121 cri.go:89] found id: ""
	I1009 20:11:42.929488   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.929497   60121 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:11:42.929502   60121 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:11:42.929559   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:11:42.964342   60121 cri.go:89] found id: ""
	I1009 20:11:42.964371   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.964381   60121 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:11:42.964389   60121 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:11:42.964437   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:11:42.998524   60121 cri.go:89] found id: ""
	I1009 20:11:42.998557   60121 logs.go:282] 0 containers: []
	W1009 20:11:42.998577   60121 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:11:42.998585   60121 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:11:42.998653   60121 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:11:43.032312   60121 cri.go:89] found id: ""
	I1009 20:11:43.032341   60121 logs.go:282] 0 containers: []
	W1009 20:11:43.032352   60121 logs.go:284] No container was found matching "kindnet"
	I1009 20:11:43.032362   60121 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:11:43.032378   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:11:43.139786   60121 logs.go:123] Gathering logs for container status ...
	I1009 20:11:43.139822   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:11:43.179057   60121 logs.go:123] Gathering logs for kubelet ...
	I1009 20:11:43.179101   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:11:43.227703   60121 logs.go:123] Gathering logs for dmesg ...
	I1009 20:11:43.227738   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:11:43.243979   60121 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:11:43.244013   60121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:11:43.369960   60121 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1009 20:11:43.369990   60121 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:11:43.370037   60121 out.go:270] * 
	* 
	W1009 20:11:43.370094   60121 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:11:43.370112   60121 out.go:270] * 
	* 
	W1009 20:11:43.370926   60121 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:11:43.374087   60121 out.go:201] 
	W1009 20:11:43.375325   60121 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:11:43.375393   60121 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:11:43.375427   60121 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:11:43.376814   60121 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 6 (223.812336ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:43.642237   63215 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-169021" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (288.56s)
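
The failure above matches the log's own diagnosis: every kubelet healthz probe on 127.0.0.1:10248 was refused, crictl found no control-plane containers, and minikube itself points at a possible kubelet cgroup-driver mismatch (issue 4172). A minimal triage sketch, built only from the commands the output already recommends; it assumes the old-k8s-version-169021 profile's VM is still reachable and that the minikube binary from this run is available:

	# On the node: check whether the kubelet is running and why it may have exited
	minikube ssh -p old-k8s-version-169021 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-169021 -- sudo journalctl -xeu kubelet
	# List any Kubernetes containers CRI-O actually started (none were found during this run)
	minikube ssh -p old-k8s-version-169021 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
	# If the kubelet logs point at a cgroup-driver mismatch, retry the start with the driver
	# pinned to systemd, as the suggestion in the output proposes
	out/minikube-linux-amd64 start -p old-k8s-version-169021 --driver=kvm2 --container-runtime=crio \
		--kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd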

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-480205 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-480205 --alsologtostderr -v=3: exit status 82 (2m0.494314641s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-480205"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:09:21.733734   62030 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:09:21.733869   62030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:09:21.733879   62030 out.go:358] Setting ErrFile to fd 2...
	I1009 20:09:21.733885   62030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:09:21.734104   62030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:09:21.734322   62030 out.go:352] Setting JSON to false
	I1009 20:09:21.734390   62030 mustload.go:65] Loading cluster: no-preload-480205
	I1009 20:09:21.734735   62030 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:09:21.734800   62030 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:09:21.734955   62030 mustload.go:65] Loading cluster: no-preload-480205
	I1009 20:09:21.735050   62030 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:09:21.735101   62030 stop.go:39] StopHost: no-preload-480205
	I1009 20:09:21.735648   62030 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:09:21.735699   62030 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:09:21.750540   62030 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46729
	I1009 20:09:21.750990   62030 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:09:21.751591   62030 main.go:141] libmachine: Using API Version  1
	I1009 20:09:21.751617   62030 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:09:21.751953   62030 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:09:21.754230   62030 out.go:177] * Stopping node "no-preload-480205"  ...
	I1009 20:09:21.755301   62030 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 20:09:21.755351   62030 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:09:21.755583   62030 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 20:09:21.755609   62030 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:09:21.758735   62030 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:09:21.759239   62030 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:08:14 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:09:21.759273   62030 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:09:21.759460   62030 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:09:21.759583   62030 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:09:21.759688   62030 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:09:21.759792   62030 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:09:21.850169   62030 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 20:09:21.908946   62030 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 20:09:21.983397   62030 main.go:141] libmachine: Stopping "no-preload-480205"...
	I1009 20:09:21.983435   62030 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:09:21.985277   62030 main.go:141] libmachine: (no-preload-480205) Calling .Stop
	I1009 20:09:21.988845   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 0/120
	I1009 20:09:22.990439   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 1/120
	I1009 20:09:23.991728   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 2/120
	I1009 20:09:24.993525   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 3/120
	I1009 20:09:25.994988   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 4/120
	I1009 20:09:26.997209   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 5/120
	I1009 20:09:27.998598   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 6/120
	I1009 20:09:28.999998   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 7/120
	I1009 20:09:30.002312   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 8/120
	I1009 20:09:31.004689   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 9/120
	I1009 20:09:32.006322   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 10/120
	I1009 20:09:33.007882   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 11/120
	I1009 20:09:34.009139   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 12/120
	I1009 20:09:35.010504   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 13/120
	I1009 20:09:36.012087   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 14/120
	I1009 20:09:37.014017   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 15/120
	I1009 20:09:38.016244   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 16/120
	I1009 20:09:39.017558   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 17/120
	I1009 20:09:40.018960   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 18/120
	I1009 20:09:41.020339   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 19/120
	I1009 20:09:42.022421   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 20/120
	I1009 20:09:43.023743   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 21/120
	I1009 20:09:44.025418   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 22/120
	I1009 20:09:45.026649   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 23/120
	I1009 20:09:46.027929   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 24/120
	I1009 20:09:47.030081   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 25/120
	I1009 20:09:48.031409   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 26/120
	I1009 20:09:49.033391   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 27/120
	I1009 20:09:50.034668   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 28/120
	I1009 20:09:51.036147   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 29/120
	I1009 20:09:52.037681   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 30/120
	I1009 20:09:53.038827   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 31/120
	I1009 20:09:54.040141   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 32/120
	I1009 20:09:55.041526   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 33/120
	I1009 20:09:56.043468   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 34/120
	I1009 20:09:57.045500   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 35/120
	I1009 20:09:58.047136   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 36/120
	I1009 20:09:59.048516   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 37/120
	I1009 20:10:00.050096   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 38/120
	I1009 20:10:01.051553   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 39/120
	I1009 20:10:02.053739   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 40/120
	I1009 20:10:03.055184   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 41/120
	I1009 20:10:04.056479   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 42/120
	I1009 20:10:05.057900   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 43/120
	I1009 20:10:06.059302   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 44/120
	I1009 20:10:07.060970   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 45/120
	I1009 20:10:08.062454   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 46/120
	I1009 20:10:09.064390   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 47/120
	I1009 20:10:10.065725   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 48/120
	I1009 20:10:11.067703   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 49/120
	I1009 20:10:12.069491   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 50/120
	I1009 20:10:13.070770   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 51/120
	I1009 20:10:14.072278   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 52/120
	I1009 20:10:15.073470   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 53/120
	I1009 20:10:16.074661   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 54/120
	I1009 20:10:17.076584   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 55/120
	I1009 20:10:18.077959   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 56/120
	I1009 20:10:19.079299   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 57/120
	I1009 20:10:20.081587   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 58/120
	I1009 20:10:21.082789   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 59/120
	I1009 20:10:22.084274   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 60/120
	I1009 20:10:23.085606   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 61/120
	I1009 20:10:24.086849   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 62/120
	I1009 20:10:25.088334   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 63/120
	I1009 20:10:26.089731   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 64/120
	I1009 20:10:27.091519   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 65/120
	I1009 20:10:28.092956   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 66/120
	I1009 20:10:29.094335   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 67/120
	I1009 20:10:30.095682   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 68/120
	I1009 20:10:31.096992   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 69/120
	I1009 20:10:32.099239   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 70/120
	I1009 20:10:33.100498   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 71/120
	I1009 20:10:34.101780   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 72/120
	I1009 20:10:35.103693   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 73/120
	I1009 20:10:36.104933   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 74/120
	I1009 20:10:37.106732   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 75/120
	I1009 20:10:38.108065   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 76/120
	I1009 20:10:39.109377   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 77/120
	I1009 20:10:40.110701   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 78/120
	I1009 20:10:41.111962   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 79/120
	I1009 20:10:42.113958   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 80/120
	I1009 20:10:43.115574   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 81/120
	I1009 20:10:44.117589   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 82/120
	I1009 20:10:45.119024   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 83/120
	I1009 20:10:46.120260   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 84/120
	I1009 20:10:47.122313   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 85/120
	I1009 20:10:48.124105   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 86/120
	I1009 20:10:49.125468   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 87/120
	I1009 20:10:50.126789   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 88/120
	I1009 20:10:51.128112   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 89/120
	I1009 20:10:52.130270   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 90/120
	I1009 20:10:53.131694   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 91/120
	I1009 20:10:54.133063   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 92/120
	I1009 20:10:55.135289   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 93/120
	I1009 20:10:56.136671   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 94/120
	I1009 20:10:57.138492   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 95/120
	I1009 20:10:58.139733   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 96/120
	I1009 20:10:59.141066   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 97/120
	I1009 20:11:00.142429   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 98/120
	I1009 20:11:01.143761   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 99/120
	I1009 20:11:02.145699   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 100/120
	I1009 20:11:03.146906   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 101/120
	I1009 20:11:04.148182   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 102/120
	I1009 20:11:05.149284   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 103/120
	I1009 20:11:06.150722   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 104/120
	I1009 20:11:07.152579   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 105/120
	I1009 20:11:08.153850   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 106/120
	I1009 20:11:09.155011   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 107/120
	I1009 20:11:10.156476   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 108/120
	I1009 20:11:11.157675   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 109/120
	I1009 20:11:12.159707   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 110/120
	I1009 20:11:13.160931   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 111/120
	I1009 20:11:14.162205   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 112/120
	I1009 20:11:15.163859   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 113/120
	I1009 20:11:16.165376   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 114/120
	I1009 20:11:17.167432   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 115/120
	I1009 20:11:18.168862   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 116/120
	I1009 20:11:19.170090   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 117/120
	I1009 20:11:20.171371   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 118/120
	I1009 20:11:21.172738   62030 main.go:141] libmachine: (no-preload-480205) Waiting for machine to stop 119/120
	I1009 20:11:22.174047   62030 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 20:11:22.174129   62030 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1009 20:11:22.176301   62030 out.go:201] 
	W1009 20:11:22.177702   62030 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1009 20:11:22.177720   62030 out.go:270] * 
	* 
	W1009 20:11:22.180496   62030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:11:22.182762   62030 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-480205 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205: exit status 3 (18.437821786s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:40.623448   63060 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1009 20:11:40.623470   63060 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-480205" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.93s)
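The stop failure above (and the two that follow for embed-certs and default-k8s-diff-port) all show the same shape: the driver's Stop call is issued, and the client then polls the VM state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT, which is why each of these Stop tests consumes roughly two minutes plus the follow-up status probe. As a rough illustration of that polling pattern only, here is a minimal standalone sketch using the Go standard library; isStopped is a hypothetical stand-in for the driver's GetState query, and this is not minikube's actual libmachine code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// isStopped is a hypothetical stand-in for asking the hypervisor
	// whether the VM has shut down (the GetState call in the log above).
	func isStopped() bool { return false }

	// waitForStop polls once per second for up to maxRetries attempts,
	// mirroring the "Waiting for machine to stop N/120" lines above.
	func waitForStop(maxRetries int) error {
		for i := 0; i < maxRetries; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

Because the loop never observes a stopped state here, the stop command exits with status 82, and the post-mortem status probe that follows fails separately with "no route to host" on the node's SSH port.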

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-503330 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-503330 --alsologtostderr -v=3: exit status 82 (2m0.488462088s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-503330"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:10:06.912178   62649 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:10:06.912294   62649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:10:06.912301   62649 out.go:358] Setting ErrFile to fd 2...
	I1009 20:10:06.912306   62649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:10:06.912470   62649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:10:06.912697   62649 out.go:352] Setting JSON to false
	I1009 20:10:06.912767   62649 mustload.go:65] Loading cluster: embed-certs-503330
	I1009 20:10:06.913149   62649 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:10:06.913220   62649 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:10:06.913396   62649 mustload.go:65] Loading cluster: embed-certs-503330
	I1009 20:10:06.913497   62649 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:10:06.913534   62649 stop.go:39] StopHost: embed-certs-503330
	I1009 20:10:06.913874   62649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:10:06.913918   62649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:10:06.928544   62649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45907
	I1009 20:10:06.928964   62649 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:10:06.929456   62649 main.go:141] libmachine: Using API Version  1
	I1009 20:10:06.929479   62649 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:10:06.929861   62649 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:10:06.932032   62649 out.go:177] * Stopping node "embed-certs-503330"  ...
	I1009 20:10:06.933258   62649 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 20:10:06.933296   62649 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:10:06.933504   62649 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 20:10:06.933535   62649 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:10:06.936266   62649 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:10:06.936666   62649 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:08:39 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:10:06.936702   62649 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:10:06.936860   62649 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:10:06.937008   62649 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:10:06.937136   62649 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:10:06.937240   62649 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:10:07.029357   62649 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 20:10:07.095776   62649 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 20:10:07.157264   62649 main.go:141] libmachine: Stopping "embed-certs-503330"...
	I1009 20:10:07.157316   62649 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:10:07.158712   62649 main.go:141] libmachine: (embed-certs-503330) Calling .Stop
	I1009 20:10:07.161891   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 0/120
	I1009 20:10:08.163279   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 1/120
	I1009 20:10:09.164595   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 2/120
	I1009 20:10:10.165969   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 3/120
	I1009 20:10:11.167412   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 4/120
	I1009 20:10:12.169434   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 5/120
	I1009 20:10:13.170874   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 6/120
	I1009 20:10:14.172112   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 7/120
	I1009 20:10:15.173266   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 8/120
	I1009 20:10:16.174521   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 9/120
	I1009 20:10:17.176463   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 10/120
	I1009 20:10:18.178114   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 11/120
	I1009 20:10:19.179401   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 12/120
	I1009 20:10:20.180682   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 13/120
	I1009 20:10:21.181899   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 14/120
	I1009 20:10:22.183688   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 15/120
	I1009 20:10:23.185353   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 16/120
	I1009 20:10:24.186662   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 17/120
	I1009 20:10:25.188013   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 18/120
	I1009 20:10:26.189310   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 19/120
	I1009 20:10:27.191504   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 20/120
	I1009 20:10:28.192786   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 21/120
	I1009 20:10:29.194178   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 22/120
	I1009 20:10:30.195470   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 23/120
	I1009 20:10:31.197082   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 24/120
	I1009 20:10:32.199051   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 25/120
	I1009 20:10:33.200366   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 26/120
	I1009 20:10:34.201791   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 27/120
	I1009 20:10:35.203195   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 28/120
	I1009 20:10:36.204502   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 29/120
	I1009 20:10:37.206598   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 30/120
	I1009 20:10:38.207933   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 31/120
	I1009 20:10:39.209385   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 32/120
	I1009 20:10:40.211134   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 33/120
	I1009 20:10:41.212357   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 34/120
	I1009 20:10:42.214343   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 35/120
	I1009 20:10:43.215536   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 36/120
	I1009 20:10:44.216845   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 37/120
	I1009 20:10:45.218198   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 38/120
	I1009 20:10:46.219603   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 39/120
	I1009 20:10:47.221656   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 40/120
	I1009 20:10:48.222965   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 41/120
	I1009 20:10:49.224204   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 42/120
	I1009 20:10:50.225319   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 43/120
	I1009 20:10:51.226554   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 44/120
	I1009 20:10:52.228275   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 45/120
	I1009 20:10:53.229541   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 46/120
	I1009 20:10:54.231199   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 47/120
	I1009 20:10:55.232480   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 48/120
	I1009 20:10:56.233839   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 49/120
	I1009 20:10:57.235718   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 50/120
	I1009 20:10:58.237035   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 51/120
	I1009 20:10:59.238245   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 52/120
	I1009 20:11:00.239799   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 53/120
	I1009 20:11:01.241052   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 54/120
	I1009 20:11:02.242757   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 55/120
	I1009 20:11:03.244111   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 56/120
	I1009 20:11:04.245385   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 57/120
	I1009 20:11:05.246772   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 58/120
	I1009 20:11:06.248414   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 59/120
	I1009 20:11:07.250562   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 60/120
	I1009 20:11:08.251875   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 61/120
	I1009 20:11:09.253480   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 62/120
	I1009 20:11:10.254801   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 63/120
	I1009 20:11:11.256024   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 64/120
	I1009 20:11:12.258078   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 65/120
	I1009 20:11:13.259344   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 66/120
	I1009 20:11:14.260646   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 67/120
	I1009 20:11:15.261827   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 68/120
	I1009 20:11:16.263468   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 69/120
	I1009 20:11:17.265880   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 70/120
	I1009 20:11:18.267647   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 71/120
	I1009 20:11:19.268929   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 72/120
	I1009 20:11:20.270601   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 73/120
	I1009 20:11:21.271905   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 74/120
	I1009 20:11:22.273286   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 75/120
	I1009 20:11:23.274704   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 76/120
	I1009 20:11:24.276258   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 77/120
	I1009 20:11:25.277747   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 78/120
	I1009 20:11:26.279229   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 79/120
	I1009 20:11:27.281314   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 80/120
	I1009 20:11:28.282857   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 81/120
	I1009 20:11:29.284524   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 82/120
	I1009 20:11:30.285874   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 83/120
	I1009 20:11:31.287454   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 84/120
	I1009 20:11:32.288814   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 85/120
	I1009 20:11:33.290099   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 86/120
	I1009 20:11:34.291399   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 87/120
	I1009 20:11:35.292663   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 88/120
	I1009 20:11:36.294147   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 89/120
	I1009 20:11:37.296173   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 90/120
	I1009 20:11:38.297575   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 91/120
	I1009 20:11:39.298945   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 92/120
	I1009 20:11:40.300325   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 93/120
	I1009 20:11:41.301656   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 94/120
	I1009 20:11:42.303593   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 95/120
	I1009 20:11:43.305776   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 96/120
	I1009 20:11:44.307176   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 97/120
	I1009 20:11:45.308632   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 98/120
	I1009 20:11:46.310092   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 99/120
	I1009 20:11:47.312360   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 100/120
	I1009 20:11:48.313751   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 101/120
	I1009 20:11:49.315104   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 102/120
	I1009 20:11:50.316341   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 103/120
	I1009 20:11:51.317936   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 104/120
	I1009 20:11:52.319878   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 105/120
	I1009 20:11:53.321299   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 106/120
	I1009 20:11:54.322527   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 107/120
	I1009 20:11:55.323855   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 108/120
	I1009 20:11:56.325514   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 109/120
	I1009 20:11:57.327867   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 110/120
	I1009 20:11:58.329160   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 111/120
	I1009 20:11:59.330793   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 112/120
	I1009 20:12:00.332517   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 113/120
	I1009 20:12:01.333811   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 114/120
	I1009 20:12:02.335859   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 115/120
	I1009 20:12:03.337312   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 116/120
	I1009 20:12:04.338722   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 117/120
	I1009 20:12:05.340045   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 118/120
	I1009 20:12:06.341446   62649 main.go:141] libmachine: (embed-certs-503330) Waiting for machine to stop 119/120
	I1009 20:12:07.342217   62649 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 20:12:07.342285   62649 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1009 20:12:07.343991   62649 out.go:201] 
	W1009 20:12:07.345425   62649 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1009 20:12:07.345451   62649 out.go:270] * 
	* 
	W1009 20:12:07.348164   62649 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:12:07.349340   62649 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-503330 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330: exit status 3 (18.584409177s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:12:25.935356   63518 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host
	E1009 20:12:25.935377   63518 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-503330" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-733270 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-733270 --alsologtostderr -v=3: exit status 82 (2m0.489155862s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-733270"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:11:06.020014   62991 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:11:06.020118   62991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:11:06.020126   62991 out.go:358] Setting ErrFile to fd 2...
	I1009 20:11:06.020130   62991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:11:06.020287   62991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:11:06.020505   62991 out.go:352] Setting JSON to false
	I1009 20:11:06.020572   62991 mustload.go:65] Loading cluster: default-k8s-diff-port-733270
	I1009 20:11:06.020904   62991 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:11:06.020967   62991 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:11:06.021117   62991 mustload.go:65] Loading cluster: default-k8s-diff-port-733270
	I1009 20:11:06.021213   62991 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:11:06.021237   62991 stop.go:39] StopHost: default-k8s-diff-port-733270
	I1009 20:11:06.021605   62991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:11:06.021644   62991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:11:06.035995   62991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I1009 20:11:06.036447   62991 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:11:06.036930   62991 main.go:141] libmachine: Using API Version  1
	I1009 20:11:06.036955   62991 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:11:06.037278   62991 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:11:06.039599   62991 out.go:177] * Stopping node "default-k8s-diff-port-733270"  ...
	I1009 20:11:06.040938   62991 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1009 20:11:06.040964   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:11:06.041148   62991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1009 20:11:06.041175   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:11:06.044073   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:11:06.044568   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:09:44 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:11:06.044594   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:11:06.044765   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:11:06.044953   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:11:06.045136   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:11:06.045263   62991 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:11:06.139217   62991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1009 20:11:06.209491   62991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1009 20:11:06.272401   62991 main.go:141] libmachine: Stopping "default-k8s-diff-port-733270"...
	I1009 20:11:06.272431   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:11:06.273740   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Stop
	I1009 20:11:06.276924   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 0/120
	I1009 20:11:07.278199   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 1/120
	I1009 20:11:08.279369   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 2/120
	I1009 20:11:09.280601   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 3/120
	I1009 20:11:10.281907   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 4/120
	I1009 20:11:11.284251   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 5/120
	I1009 20:11:12.285550   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 6/120
	I1009 20:11:13.286632   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 7/120
	I1009 20:11:14.288033   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 8/120
	I1009 20:11:15.289667   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 9/120
	I1009 20:11:16.290969   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 10/120
	I1009 20:11:17.292381   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 11/120
	I1009 20:11:18.293595   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 12/120
	I1009 20:11:19.295042   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 13/120
	I1009 20:11:20.296280   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 14/120
	I1009 20:11:21.298296   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 15/120
	I1009 20:11:22.299322   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 16/120
	I1009 20:11:23.300736   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 17/120
	I1009 20:11:24.301976   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 18/120
	I1009 20:11:25.303389   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 19/120
	I1009 20:11:26.305457   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 20/120
	I1009 20:11:27.306690   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 21/120
	I1009 20:11:28.308097   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 22/120
	I1009 20:11:29.309373   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 23/120
	I1009 20:11:30.310649   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 24/120
	I1009 20:11:31.312591   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 25/120
	I1009 20:11:32.314956   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 26/120
	I1009 20:11:33.316300   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 27/120
	I1009 20:11:34.317673   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 28/120
	I1009 20:11:35.318898   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 29/120
	I1009 20:11:36.320232   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 30/120
	I1009 20:11:37.321480   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 31/120
	I1009 20:11:38.322747   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 32/120
	I1009 20:11:39.323914   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 33/120
	I1009 20:11:40.325371   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 34/120
	I1009 20:11:41.327224   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 35/120
	I1009 20:11:42.328398   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 36/120
	I1009 20:11:43.329845   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 37/120
	I1009 20:11:44.331337   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 38/120
	I1009 20:11:45.333476   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 39/120
	I1009 20:11:46.335870   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 40/120
	I1009 20:11:47.337153   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 41/120
	I1009 20:11:48.338717   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 42/120
	I1009 20:11:49.340003   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 43/120
	I1009 20:11:50.341334   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 44/120
	I1009 20:11:51.343325   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 45/120
	I1009 20:11:52.345475   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 46/120
	I1009 20:11:53.346708   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 47/120
	I1009 20:11:54.347894   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 48/120
	I1009 20:11:55.349378   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 49/120
	I1009 20:11:56.351318   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 50/120
	I1009 20:11:57.352485   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 51/120
	I1009 20:11:58.353709   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 52/120
	I1009 20:11:59.354997   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 53/120
	I1009 20:12:00.356435   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 54/120
	I1009 20:12:01.358300   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 55/120
	I1009 20:12:02.359503   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 56/120
	I1009 20:12:03.360626   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 57/120
	I1009 20:12:04.361646   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 58/120
	I1009 20:12:05.363029   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 59/120
	I1009 20:12:06.365032   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 60/120
	I1009 20:12:07.366593   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 61/120
	I1009 20:12:08.367924   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 62/120
	I1009 20:12:09.369185   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 63/120
	I1009 20:12:10.370438   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 64/120
	I1009 20:12:11.372355   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 65/120
	I1009 20:12:12.373772   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 66/120
	I1009 20:12:13.374958   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 67/120
	I1009 20:12:14.376341   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 68/120
	I1009 20:12:15.377900   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 69/120
	I1009 20:12:16.380038   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 70/120
	I1009 20:12:17.381722   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 71/120
	I1009 20:12:18.383237   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 72/120
	I1009 20:12:19.384586   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 73/120
	I1009 20:12:20.385946   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 74/120
	I1009 20:12:21.387877   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 75/120
	I1009 20:12:22.389156   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 76/120
	I1009 20:12:23.390343   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 77/120
	I1009 20:12:24.391692   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 78/120
	I1009 20:12:25.393676   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 79/120
	I1009 20:12:26.395754   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 80/120
	I1009 20:12:27.397038   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 81/120
	I1009 20:12:28.398457   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 82/120
	I1009 20:12:29.399829   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 83/120
	I1009 20:12:30.401091   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 84/120
	I1009 20:12:31.403180   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 85/120
	I1009 20:12:32.404256   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 86/120
	I1009 20:12:33.405580   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 87/120
	I1009 20:12:34.406911   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 88/120
	I1009 20:12:35.407999   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 89/120
	I1009 20:12:36.410034   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 90/120
	I1009 20:12:37.411560   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 91/120
	I1009 20:12:38.413275   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 92/120
	I1009 20:12:39.414735   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 93/120
	I1009 20:12:40.416002   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 94/120
	I1009 20:12:41.417955   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 95/120
	I1009 20:12:42.419358   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 96/120
	I1009 20:12:43.420747   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 97/120
	I1009 20:12:44.422016   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 98/120
	I1009 20:12:45.423382   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 99/120
	I1009 20:12:46.425597   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 100/120
	I1009 20:12:47.426824   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 101/120
	I1009 20:12:48.428275   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 102/120
	I1009 20:12:49.429548   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 103/120
	I1009 20:12:50.431220   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 104/120
	I1009 20:12:51.433148   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 105/120
	I1009 20:12:52.434594   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 106/120
	I1009 20:12:53.435787   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 107/120
	I1009 20:12:54.437078   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 108/120
	I1009 20:12:55.438525   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 109/120
	I1009 20:12:56.440465   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 110/120
	I1009 20:12:57.441678   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 111/120
	I1009 20:12:58.442874   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 112/120
	I1009 20:12:59.444176   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 113/120
	I1009 20:13:00.445345   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 114/120
	I1009 20:13:01.447375   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 115/120
	I1009 20:13:02.448739   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 116/120
	I1009 20:13:03.450021   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 117/120
	I1009 20:13:04.451379   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 118/120
	I1009 20:13:05.452755   62991 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for machine to stop 119/120
	I1009 20:13:06.453994   62991 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1009 20:13:06.454061   62991 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1009 20:13:06.455996   62991 out.go:201] 
	W1009 20:13:06.457528   62991 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1009 20:13:06.457559   62991 out.go:270] * 
	* 
	W1009 20:13:06.460062   62991 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:13:06.461420   62991 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-733270 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270: exit status 3 (18.60808612s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:13:25.071372   63885 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host
	E1009 20:13:25.071392   63885 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-733270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.10s)
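The GUEST_STOP_TIMEOUT above can be re-driven outside the test harness for local triage. The sketch below is a minimal, hypothetical reproduction helper and is not part of minikube's test suite: the binary path, profile name, and exit status 82 are taken from the failing invocation logged above, while the helper program itself and its 3-minute deadline are illustrative assumptions.

// repro_stop.go: sketch that re-runs the stop command which timed out above and
// surfaces its exit status. Binary path and profile name are copied from the log;
// the 3-minute deadline is an assumed value, not the harness's own timeout.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Give the stop command a generous deadline (assumed; the libmachine log above
	// waits through its own 120-retry window before giving up).
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	// Re-run the exact invocation from the failing test.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
		"stop", "-p", "default-k8s-diff-port-733270", "--alsologtostderr", "-v=3")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	err := cmd.Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// The harness above saw exit status 82 (GUEST_STOP_TIMEOUT).
		fmt.Fprintf(os.Stderr, "stop exited with status %d\n", exitErr.ExitCode())
		os.Exit(1)
	}
	if err != nil {
		fmt.Fprintln(os.Stderr, "stop did not run:", err)
		os.Exit(1)
	}
}

If the non-zero status reproduces, the libvirt domain state can then be inspected directly (for example with `virsh list --all`) to confirm whether the guest ever left the "Running" state reported in the log.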

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205: exit status 3 (3.172451985s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:43.795358   63182 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1009 20:11:43.795378   63182 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-480205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-480205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.148838362s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-480205 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205: exit status 3 (3.062305891s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:53.007491   63381 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host
	E1009 20:11:53.007510   63381 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.162:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-480205" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-169021 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-169021 create -f testdata/busybox.yaml: exit status 1 (41.821235ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-169021" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-169021 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 6 (217.188592ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:43.902257   63255 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-169021" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 6 (228.712552ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:11:44.129425   63304 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-169021" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.619698131s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-169021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-169021 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-169021 describe deploy/metrics-server -n kube-system: exit status 1 (41.51847ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-169021" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-169021 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 6 (216.238775ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:13:41.009010   64169 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-169021" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330: exit status 3 (3.168004365s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:12:29.103423   63632 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host
	E1009 20:12:29.103443   63632 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-503330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-503330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152143646s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-503330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330: exit status 3 (3.063596405s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:12:38.319405   63712 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host
	E1009 20:12:38.319427   63712 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.97:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-503330" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270: exit status 3 (3.167869932s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:13:28.239365   63983 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host
	E1009 20:13:28.239387   63983 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-733270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-733270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152130665s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-733270 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270: exit status 3 (3.063651821s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 20:13:37.455394   64064 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host
	E1009 20:13:37.455418   64064 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.134:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-733270" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (721.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1009 20:14:51.613120   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:14:51.908836   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:16:14.984835   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:19:51.613619   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:19:51.908372   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:21:14.685658   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m58.451662917s)

                                                
                                                
-- stdout --
	* [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
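The fragment above is the kubelet systemd drop-in that minikube renders for this node; a few lines further down, the log shows it being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes). As a minimal, hypothetical Go sketch of how such a drop-in could be rendered from the node settings (the struct, field names and template below are illustrative only, not minikube's actual implementation), text/template from the standard library is enough:

    // Hypothetical sketch: render a kubelet systemd drop-in like the one
    // logged above from a small node-config struct. Illustrative only.
    package main

    import (
    	"os"
    	"text/template"
    )

    type nodeConfig struct {
    	KubernetesVersion string
    	NodeName          string
    	NodeIP            string
    	RuntimeSocket     string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.RuntimeSocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	cfg := nodeConfig{
    		KubernetesVersion: "v1.20.0",
    		NodeName:          "old-k8s-version-169021",
    		NodeIP:            "192.168.61.119",
    		RuntimeSocket:     "unix:///var/run/crio/crio.sock",
    	}
    	// Render to stdout; in the log, minikube copies the rendered bytes to
    	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
    	tmpl := template.Must(template.New("kubelet-dropin").Parse(dropIn))
    	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
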
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
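The steps above (create the systemd directories, pin control-plane.minikube.internal in /etc/hosts, reload systemd, start kubelet) can be replayed by hand. A minimal shell sketch, assuming the node IP 192.168.61.119 from this run; it is simplified to append the hosts entry only when missing, whereas the log rewrites any existing entry:

    # Replay of the host-side preparation recorded above (sketch only; the
    # kubelet unit and drop-in contents are generated by the tool, not shown here).
    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # make control-plane.minikube.internal resolve to this node's IP
    grep -q 'control-plane.minikube.internal' /etc/hosts || \
      printf '192.168.61.119\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet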
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
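The certificate handling above uses two standard openssl idioms: hashing a CA so it can be linked into /etc/ssl/certs under its subject-hash name, and -checkend to confirm a certificate stays valid for the next 24 hours. A rough shell rendering of the same checks (not minikube's own code), using the minikubeCA and apiserver-kubelet-client files named in the log:

    # Link the CA into the system trust store under its subject-hash name.
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
    # -checkend 86400 exits 0 only if the cert is still valid 24h from now.
    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "cert valid for >24h" || echo "cert expires within 24h"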
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
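The cleanup just logged applies one rule per file: keep a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm can regenerate it. A compact sketch of that loop, equivalent to the per-file grep/rm pairs above:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done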
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
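Rather than a full kubeadm init, the restart path replays individual init phases against the generated config, as the five commands above show. A minimal sketch of that sequence; kubeadm_phase is a helper introduced here for brevity and is not a minikube function:

    kubeadm_phase() {
      sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
        kubeadm init phase "$@" --config /var/tmp/minikube/kubeadm.yaml
    }
    kubeadm_phase certs all           # regenerate any missing certificates
    kubeadm_phase kubeconfig all      # rewrite admin/kubelet/controller-manager/scheduler kubeconfigs
    kubeadm_phase kubelet-start       # write kubelet config and (re)start the kubelet
    kubeadm_phase control-plane all   # static pod manifests for apiserver, controller-manager, scheduler
    kubeadm_phase etcd local          # static pod manifest for the local etcd member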
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
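The run of identical pgrep calls above is a poll loop: roughly twice a second the runner checks whether a kube-apiserver process has appeared, and in this run it gives up after about a minute and falls back to collecting diagnostics. A shell sketch of the same wait; the 60-second deadline is an assumption inferred from the timestamps:

    deadline=$((SECONDS + 60))   # assumed deadline; ~1 minute of polling is visible above
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo 'no kube-apiserver process appeared' >&2; break; }
      sleep 0.5
    done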
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
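Each failed poll cycle ends with the same diagnostics bundle: the kubelet and CRI-O journals, recent kernel warnings, a kubectl describe nodes attempt (which fails while the apiserver is down), and a container listing. A shell sketch of that collection, using the binary paths from this run:

    sudo journalctl -u kubelet -n 400                        # kubelet journal
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig              # fails here: connection to localhost:8443 refused
    sudo journalctl -u crio -n 400                           # CRI-O journal
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a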
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
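	(Editor's note: the cycle above repeats for the remainder of this log. For readers who want to rerun the same diagnostics by hand inside the node, a minimal sketch follows; it only consolidates the commands already shown above, assuming the same kubectl binary path and kubeconfig location taken verbatim from this log, which may differ on other clusters.)

	# hedged sketch: re-run the checks minikube's log gatherer performs above,
	# e.g. from a `minikube ssh` session on the affected node
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"   # empty output means no container was found
	done
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "kube-apiserver process not running"
	sudo journalctl -u kubelet -n 400              # kubelet logs
	sudo journalctl -u crio -n 400                 # CRI-O logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig    # fails with "connection refused" while the apiserver is down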
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
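	[editor's note] The cycle above (repeated below with later timestamps) is the probe loop visible in this log: search for a kube-apiserver process, ask the CRI runtime for each expected control-plane container by name, and, when none are found, fall back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. The following Go snippet is a minimal illustrative sketch of that probing step only, not minikube's actual implementation; it simply shells out to the same `crictl ps -a --quiet --name=<component>` command that appears in the log and reports which components have no containers.

	// sketch.go - illustrative only; assumes crictl is installed and sudo is available
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component names probed in the log above.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Equivalent to the logged command: sudo crictl ps -a --quiet --name=<name>
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("crictl failed for %q: %v\n", name, err)
				continue
			}
			// --quiet prints one container ID per line; an empty result means "not found".
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
			} else {
				fmt.Printf("%q: %d container(s)\n", name, len(ids))
			}
		}
	}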
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
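	[editor's note] Every "describe nodes" fallback in this log fails with "The connection to the server localhost:8443 was refused", which is consistent with no apiserver container ever being found: nothing is listening on the apiserver port. The snippet below is a minimal sketch (assumed to run on the node itself, port 8443 taken from the log) of how one could confirm that directly with a plain TCP dial; it is a diagnostic illustration, not part of the test harness.

	// dialcheck.go - illustrative only; port taken from the "localhost:8443" errors above
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused connection here matches the kubectl errors recorded in the log.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver port not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}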
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
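Each polling cycle above asks CRI-O, via crictl, whether any control-plane container (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) exists, then gathers kubelet, dmesg, CRI-O and "describe nodes" output; every cycle finds zero containers, and "describe nodes" fails because nothing is listening on localhost:8443. A minimal sketch of the equivalent manual check from inside the node follows; the binary and kubeconfig paths are the ones printed in this log, while the loop itself is only illustrative and is not minikube's code.

  # Illustrative only: reproduce the per-component container check from the log.
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
    echo "== $c =="
    sudo crictl ps -a --quiet --name="$c"    # empty output means no container was found
  done
  # The same log sources minikube collects on each cycle:
  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig  # refused while the apiserver is down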
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
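After roughly four minutes of polling without ever seeing a control-plane container, minikube abandons the in-place restart and wipes the node with kubeadm reset before re-initializing from the generated config. A sketch of that fallback is below, using only the paths shown in this run's log; it illustrates the sequence rather than minikube's implementation.

  # Paths are taken from the log lines above and below this point.
  sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
    kubeadm reset --cri-socket /var/run/crio/crio.sock --force
  # A fresh "kubeadm init --config /var/tmp/minikube/kubeadm.yaml" follows later in the log.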
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
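Before re-initializing, minikube checks each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and removes any file where that grep fails; in this run all four files are missing, so each grep exits with status 2 and the rm is effectively a no-op. A compact, illustrative version of that cleanup (the file list and URL come from the log; the loop is not minikube's code):

  # Remove kubeconfigs that are missing or point at a stale endpoint.
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
      sudo rm -f "/etc/kubernetes/$f"
    fi
  done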
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	* 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	* 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
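The failing start above ends with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. As an illustrative sketch only (not part of the captured test output), a manual retry of the same profile with that suggestion applied, preceded by the kubelet checks quoted in the kubeadm error, would look roughly like:

    # inspect the kubelet on the node first (commands taken from the kubeadm output above)
    out/minikube-linux-amd64 ssh -p old-k8s-version-169021 -- sudo systemctl status kubelet
    out/minikube-linux-amd64 ssh -p old-k8s-version-169021 -- sudo journalctl -xeu kubelet

    # retry the identical start command with the suggested kubelet cgroup-driver override
    out/minikube-linux-amd64 start -p old-k8s-version-169021 --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
      --extra-config=kubelet.cgroup-driver=systemd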
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (232.369529ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25: (1.51715847s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
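The six `openssl x509 -noout -checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. A minimal Go sketch of the same check using the standard library (not minikube's implementation; the path below is one of the certificates from the log):

```go
// Report whether a PEM-encoded certificate expires within the given window,
// mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// -checkend semantics: does NotAfter fall before now+window?
	return cert.NotAfter.Before(time.Now().Add(window)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
```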
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
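The grep/rm sequence above decides, for each of the four kubeconfigs under /etc/kubernetes, whether it already points at https://control-plane.minikube.internal:8443 and removes it otherwise, so the following `kubeadm init phase kubeconfig all` can regenerate it. A compact Go sketch of that decision loop (a standalone illustration, not minikube's kubeadm.go):

```go
// For each kubeconfig, keep it only if it already targets the expected
// control-plane endpoint; otherwise delete it so kubeadm rewrites it.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range files {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintf(os.Stderr, "removing %s: %v\n", path, rmErr)
			}
			continue
		}
		fmt.Printf("%s already targets %s, keeping it\n", path, endpoint)
	}
}
```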
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
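The healthz probes above follow the usual apiserver startup sequence: connection refused while the process comes up, 403 for the anonymous user, 500 while post-start hooks such as rbac/bootstrap-roles and apiservice-discovery-controller finish, and finally 200 "ok". A minimal Go sketch of such a poll loop, assuming an unauthenticated client that skips TLS verification; the endpoint, interval and timeout are illustrative, not minikube's exact values:

```go
// Poll an apiserver /healthz endpoint until it returns 200 or a deadline
// expires, tolerating the transient 403/500 responses seen during startup.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe in the log is unauthenticated, so certificate
		// verification is skipped for this health check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.97:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```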
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
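The pod_ready waits above repeatedly inspect each system pod's PodReady condition (and skip the wait entirely while the hosting node reports Ready=False). A minimal client-go sketch of just the per-pod condition check, with the pod name taken from the log and the kubeconfig path assumed; this is an illustration, not minikube's pod_ready.go:

```go
// Wait for a named pod in kube-system to report the PodReady condition True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-503330", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}
```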
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
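The provision step above issues a server certificate whose SANs cover 127.0.0.1, 192.168.72.134, the machine name, localhost and minikube, signed by the existing CA. A rough Go sketch of signing such a certificate; the paths, PKCS#1 key format, organization and validity period are illustrative assumptions, and error handling is elided for brevity:

```go
// Issue a CA-signed server certificate with the SAN list from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and key (illustrative paths; errors ignored
	// in this sketch).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-733270"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: the names and IPs the server cert must match.
		DNSNames:    []string{"default-k8s-diff-port-733270", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.134")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```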
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
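The lines above show the apiserver being polled at https://192.168.72.134:8444/healthz roughly every half second; the 500 responses simply mean a few post-start hooks (rbac/bootstrap-roles, priority-and-fairness-config-producer, apiservice-discovery-controller) have not finished, and the loop stops once the endpoint returns 200. A minimal Go sketch of that kind of poll loop, with an illustrative timeout and TLS handling (this is not minikube's actual api_server.go code):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
// A 500 with "healthz check failed" just means some post-start hooks are not
// done yet, so the caller retries.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cluster-internal cert during bring-up;
		// verification is skipped here only because this is a local bootstrap check.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.134:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}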
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
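The bridge CNI step above writes a 496-byte conflist into /etc/cni/net.d; the file's contents are not printed in the log, so the sketch below only illustrates the general shape of a bridge-plus-portmap conflist (subnet and plugin options are assumptions, not the bytes minikube actually copied):

package main

import (
	"log"
	"os"
)

// A generic bridge CNI config of the kind the "Configuring bridge CNI" step
// installs; values here are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}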
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
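The pod_ready loop above keeps re-checking each system-critical pod until its Ready condition turns True, and skips pods whose node itself reports Ready=False. A hedged client-go sketch of that per-pod check; the kubeconfig loading and the hard-coded pod name are assumptions for illustration, not minikube's pod_ready.go implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-zr4bl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}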
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
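The fix.go lines above read the guest's clock over SSH (date +%s.%N), compare it with the host's timestamp, and accept the 65ms delta as within tolerance. A minimal sketch of that comparison; the 2-second tolerance here is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK returns the absolute guest/host clock difference and whether it
// is within tolerance; if it were not, the caller would resync the guest clock.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(65 * time.Millisecond) // delta taken from the log above
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}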
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
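
A minimal Go sketch of the server-certificate step logged above (provision.go:117): it creates a key pair and a certificate carrying the same SANs. Unlike minikube, which signs with its ca.pem/ca-key.pem, this sketch self-signs for brevity, and the output file names are illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a 2048-bit RSA key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-480205"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same SANs as in the log: 127.0.0.1 192.168.39.162 localhost minikube no-preload-480205
		DNSNames:    []string{"localhost", "minikube", "no-preload-480205"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.162")},
	}
	// Self-signed here (template used as its own parent) purely for illustration.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}
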
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
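
A small Go sketch of the guest-clock check logged above (fix.go:216-229): the host asks the guest for `date +%s.%N`, compares it against its own clock, and accepts the machine if the drift stays inside a tolerance. The tolerance value here is an assumption; the log only shows that a ~64ms delta passed.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute drift between guest and host clocks and
// whether it is acceptable. It is a hypothetical helper, not minikube's fix.go.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(64 * time.Millisecond) // drift comparable to the ~64.58ms delta in the log
	d, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
}
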
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
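
A minimal Go sketch of the CRI-O configuration edits applied above via sed over /etc/crio/crio.conf.d/02-crio.conf (pause image and cgroup driver), assuming a locally readable file rather than minikube's ssh_runner; the path and the edit-then-restart flow mirror the log, but this is illustration only.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("updated 02-crio.conf; a systemctl restart crio would follow, as in the log")
}
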
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
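
A simplified Go sketch of the image-loading loop above (cache_images.go / crio.go:275): for each cached tarball, skip it if it is missing and otherwise load it with `podman load -i`. This runs locally against assumed paths and stands in for minikube's copy-then-load over SSH; it is not the project's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	tarballs := []string{
		"etcd_3.5.15-0",
		"coredns_v1.11.3",
		"kube-controller-manager_v1.31.1",
		"kube-scheduler_v1.31.1",
		"kube-apiserver_v1.31.1",
		"kube-proxy_v1.31.1",
		"storage-provisioner_v5",
	}
	dir := "/var/lib/minikube/images" // directory used in the log above
	for _, name := range tarballs {
		p := filepath.Join(dir, name)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("skipping %s: %v\n", name, err) // the real flow would copy it from the cache first
			continue
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", p)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Printf("load %s failed: %v\n", name, err)
		}
	}
}
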
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
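
[Editor's note] The three test-and-link commands above recreate the OpenSSL subject-hash symlinks (e.g. /etc/ssl/certs/51391683.0) that make the copied PEM files visible to the system trust store. Computing the hash by hand is fiddly, so a sketch that shells out to openssl the same way the log does (paths assumed) is the simplest equivalent:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces `openssl x509 -hash -noout -in <pem>` followed by
// `ln -fs <pem> /etc/ssl/certs/<hash>.0`.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
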
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
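
[Editor's note] Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would push minikube toward regenerating it. An equivalent check in Go with crypto/x509 (the cert path is one of those probed above, chosen for illustration):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window (the -checkend 86400 equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
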
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
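
[Editor's note] The `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call above returns no IDs because the node was fully stopped, so there is nothing to tear down before the restart. A thin Go wrapper around that same command (crictl's default runtime socket assumed) might look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists container IDs labelled with the kube-system
// namespace, mirroring the crictl invocation in the log.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
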
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
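
[Editor's note] The healthz sequence above is the usual restart pattern: connection refused while the static pod comes up, then 403 for the anonymous user until the RBAC bootstrap roles exist, then 500 while post-start hooks finish, and finally 200. A stripped-down poller in the same spirit (endpoint, interval, and timeout assumed; certificate verification skipped only because the minikube CA is not in the system trust store):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver cert is signed by minikubeCA, so skip verification
		// for this health probe only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.162:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("not ready yet, status:", resp.StatusCode)
		} else {
			fmt.Println("not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for /healthz")
}
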
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
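
[Editor's note] Minikube then drops a bridge CNI config at /etc/cni/net.d/1-k8s.conflist (496 bytes per the scp line above). The exact file is not shown in the log; the sketch below writes an approximate conflist using the podSubnet from the kubeadm config, purely to illustrate the shape such a file takes. Field values are assumptions, not minikube's actual contents.

package main

import "os"

// An approximate bridge CNI chain: host-local IPAM on the cluster pod CIDR
// plus a portmap plugin.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
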
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
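
[Editor's note] The NodePressure step above reads capacity straight off the Node object (2 CPUs, ~17 GiB ephemeral storage here) to confirm the control plane is not restarting onto a starved machine. Roughly, with client-go (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
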
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
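
[Editor's note] Every per-pod wait above short-circuits for the same reason: a pod cannot be Ready while its node reports Ready=False, so the loop records the skip and moves on. A compact client-go version of a single pod's Ready check (kubeconfig path, namespace, and pod name are placeholders taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-no-preload-480205", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podReady(pod))
}
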
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
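
[Editor's note] The oom_adj probe above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`) confirms the apiserver got the protective -16 score. A minimal Go re-implementation that walks /proc for the process name (oom_adj is the legacy interface; newer code would read oom_score_adj):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjFor returns the contents of /proc/<pid>/oom_adj for the first
// process whose comm matches name.
func oomAdjFor(name string) (string, error) {
	dirs, err := filepath.Glob("/proc/[0-9]*")
	if err != nil {
		return "", err
	}
	for _, dir := range dirs {
		comm, err := os.ReadFile(filepath.Join(dir, "comm"))
		if err != nil {
			continue
		}
		if strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join(dir, "oom_adj"))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(adj)), nil
	}
	return "", fmt.Errorf("no process named %q found", name)
}

func main() {
	adj, err := oomAdjFor("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}
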
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
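	The apply/enable sequence above runs kubectl inside the guest over SSH, pointing it at the in-VM kubeconfig. For reference, the storage-provisioner apply can be reproduced by hand with the host address, key path, and binary/manifest paths copied from the log lines above; this is an illustrative sketch, not part of the test run, and it assumes the VM is still reachable at that address.

	    # Illustrative re-run of the addon apply logged above (all values copied from the log).
	    ssh -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa \
	        docker@192.168.39.162 \
	        'sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	         /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml'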
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
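	The readiness checks above poll each system-critical pod until its Ready condition reports True; metrics-server-6867b74b74-fhcfl never gets there, which is what the later pod_ready lines keep reporting. Outside the test harness, roughly the same wait can be expressed with kubectl wait. The sketch below is illustrative only; the pod names are taken from the log, and the context name no-preload-480205 is an assumption (minikube normally names the kubeconfig context after the profile).

	    # Illustrative equivalent of the pod readiness polling above.
	    # The metrics-server pod stays NotReady in this run, so this would time out after 6m,
	    # just like the test's own wait.
	    kubectl --context no-preload-480205 -n kube-system wait --for=condition=Ready \
	        pod/coredns-7c65d6cfc9-dddm2 pod/etcd-no-preload-480205 \
	        pod/kube-scheduler-no-preload-480205 pod/metrics-server-6867b74b74-fhcfl \
	        --timeout=6m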
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
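	The cycle above (pid 64287, the old-k8s-version profile) is the retry loop that produces most of the remaining output: pgrep finds no kube-apiserver process, crictl finds no control-plane containers, "kubectl describe nodes" fails because nothing answers on localhost:8443, and the run falls back to collecting kubelet, dmesg, CRI-O, and container-status logs. Below is a condensed, illustrative bash sketch of those probes, with commands copied from the log; it assumes it is run inside the guest VM rather than on the CI host.

	    # Condensed version of the diagnostic probes logged above (illustrative only).
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	        ids=$(sudo crictl ps -a --quiet --name="$c")
	        [ -z "$ids" ] && echo "no container found matching $c"
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	        --kubeconfig=/var/lib/minikube/kubeconfig    # refused while the apiserver is down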
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
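The cycle above then simply repeats: each expected control-plane container is probed with crictl, none is found, and the harness falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal shell sketch of that same probe sequence, built only from commands the log itself records (the retry count and sleep interval are illustrative assumptions, not values from this report), would be:

    #!/usr/bin/env bash
    # Reproduces the container probes and log collection shown above.
    # Run on the node (e.g. via "minikube ssh"); every command below
    # appears verbatim in the log — only the loop itself is an assumption.
    set -u
    components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
                kube-controller-manager kindnet kubernetes-dashboard)
    for attempt in 1 2 3; do                      # retry count: illustrative
      for c in "${components[@]}"; do
        ids=$(sudo crictl ps -a --quiet --name="$c")
        [ -z "$ids" ] && echo "no container found matching \"$c\""
      done
      # Fallback log sources the harness gathers when nothing is running.
      sudo journalctl -u kubelet -n 400 > kubelet.log
      sudo journalctl -u crio -n 400    > crio.log
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
      sudo crictl ps -a                 > containers.log
      sleep 3                                     # interval: illustrative
    done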
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
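Every "describe nodes" attempt in this run fails the same way because nothing answers on localhost:8443, which is consistent with crictl finding no kube-apiserver container at all. A quick illustrative check from inside the node (standard tools, not commands taken from this report) would be:

    # Is anything listening on the apiserver port, and does it answer?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"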
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
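
When every crictl probe comes back empty, each retry falls back to gathering whatever diagnostics it can: the kubelet and CRI-O journals, dmesg, container status, and kubectl describe nodes. The describe-nodes step keeps failing with "connection to the server localhost:8443 was refused" precisely because no kube-apiserver container is running. The sketch below replays that fallback pass using only the commands visible in the log; the gather helper is hypothetical and the structure is illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one diagnostic command and prints its combined output.
func gather(label string, script string) {
	out, err := exec.Command("bash", "-c", script).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", label, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	// Fails with "connection refused" while no kube-apiserver container exists.
	gather("describe nodes",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
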
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
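
Interleaved with the retry loop above, three other test processes (63427, 63744, 64109) keep polling their metrics-server pods and logging pod_ready.go:103 while the Ready condition stays False. A rough client-go equivalent of that readiness check is sketched below; it is an illustration only, not minikube's pod_ready.go, the kubeconfig path is a placeholder, and the pod name is one of the pods named in this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the tests use the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(
		context.TODO(), "metrics-server-6867b74b74-8p24l", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		// A pod counts as ready only when its PodReady condition is True.
		if c.Type == corev1.PodReady {
			ready = c.Status == corev1.ConditionTrue
		}
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
}
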
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
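Editorial note on the cycle above: the repeated cri.go / logs.go lines are minikube probing for each control-plane container by name with crictl; every probe returns an empty ID list, which is why the later describe-nodes step cannot reach the API server. A minimal sketch of the same probe, run manually on the node (assumes crictl is installed there, as the log implies; this is an illustration, not output from the recorded run):

    # probe for the same control-plane containers the log checks, one name at a time
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""   # mirrors the logs.go:284 warnings above
      else
        echo "$name: $ids"
      fi
    done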
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
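The recurring "connection to the server localhost:8443 was refused" from the describe-nodes step is consistent with the empty crictl results above: no kube-apiserver container ever started, so nothing is listening on the apiserver port. A hedged way to confirm that from the node with standard tools (not part of the recorded run):

    # is anything listening on the apiserver port?
    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
    # does the apiserver answer at all? (-k because the serving cert is self-signed)
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"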
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
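The interleaved pod_ready lines from the other three processes (63427, 63744, 64109) are a separate wait loop, re-checking the Ready condition of each cluster's metrics-server pod every few seconds and finding it "False" throughout this section. A rough kubectl equivalent, using one pod name from the log as the example (a sketch only; the test itself polls through the client-go API, not kubectl):

    # print the Ready condition of the metrics-server pod seen in the log
    kubectl -n kube-system get pod metrics-server-6867b74b74-8p24l \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # or block until it becomes Ready, giving up after 2 minutes
    kubectl -n kube-system wait pod metrics-server-6867b74b74-8p24l \
      --for=condition=Ready --timeout=2m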
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
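Each failed cycle ends with the same log-gathering pass; the commands appear verbatim in the lines above and can be replayed on the node to inspect why the static pods never came up (a reproduction sketch of those exact commands, not output from this run):

    # last 400 kubelet and CRI-O journal entries, as gathered by the cycle above
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # kernel warnings and errors, same filter as the log
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # container status fallback, as in the log
    sudo crictl ps -a || sudo docker ps -a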
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
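	(Editor's illustration, not part of the captured log: the api_server.go lines at 20:22:06 above record a health probe against https://192.168.50.97:8443/healthz that expects a 200 response with the literal body "ok". The following is a minimal, hypothetical Go sketch of such a probe; it is not minikube's actual implementation, the URL is simply copied from the log, and TLS verification is skipped only to keep the sketch short where the real check would trust the cluster CA.)

	// healthzcheck.go - illustrative sketch only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for your own cluster.
		url := "https://192.168.50.97:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification keeps the sketch self-contained; a real
			// check would load the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with body "ok", matching the
		// "returned 200: ok" lines in the log above.
		fmt.Printf("status=%d body=%q\n", resp.StatusCode, string(body))
	}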
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
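	(Editor's illustration, not part of the captured log: the repeated pod_ready.go lines above record a polling loop on each pod's Ready condition, with a 6m0s timeout per pod. The sketch below is a hypothetical client-go snippet, not the code that produced these logs; the kubeconfig path is a placeholder and the pod name is just one example taken from the log.)

	// podready.go - illustrative sketch only, assuming client-go is available.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod has condition Ready=True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; pod name copied from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s waits in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"etcd-default-k8s-diff-port-733270", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}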
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
	
	
	==> CRI-O <==
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.783989651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505544783969727,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da338c38-3a01-4f89-8416-bb1320482eaf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.784583554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aad999a8-e906-484a-8692-caa759981139 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.784645126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aad999a8-e906-484a-8692-caa759981139 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.784677682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aad999a8-e906-484a-8692-caa759981139 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.822490434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57850589-35cd-4bcb-9280-70aa0eb327af name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.822605650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57850589-35cd-4bcb-9280-70aa0eb327af name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.823917426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6587dc1a-2f14-4ad6-8284-7a3206220057 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.824562255Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505544824528635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6587dc1a-2f14-4ad6-8284-7a3206220057 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.825122352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92e5db62-4c30-42f0-8d3e-64be08db70b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.825253557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92e5db62-4c30-42f0-8d3e-64be08db70b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.825304406Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92e5db62-4c30-42f0-8d3e-64be08db70b3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.862599408Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a59fa0a-f651-4249-8f02-77ddc070faa3 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.862715890Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a59fa0a-f651-4249-8f02-77ddc070faa3 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.864173160Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=86725930-810d-48fc-896d-9cede382c6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.864728225Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505544864699439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86725930-810d-48fc-896d-9cede382c6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.865283560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdf4c5ab-0860-43dd-81df-77fb38adf797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.865379174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdf4c5ab-0860-43dd-81df-77fb38adf797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.865427271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cdf4c5ab-0860-43dd-81df-77fb38adf797 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.905843628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=111ddf6d-b531-45a7-be76-53ca7025f650 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.905915899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=111ddf6d-b531-45a7-be76-53ca7025f650 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.907422367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3c0ea6e-2e2e-440f-8daa-56b31787427a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.907852713Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505544907818186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3c0ea6e-2e2e-440f-8daa-56b31787427a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.908465230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b3c1b79-0bdc-48ad-9683-d13388f2964a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.908516359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b3c1b79-0bdc-48ad-9683-d13388f2964a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:25:44 old-k8s-version-169021 crio[636]: time="2024-10-09 20:25:44.908547692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5b3c1b79-0bdc-48ad-9683-d13388f2964a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051476] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.042560] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.485695] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.304560] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.057777] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071040] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192125] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.124687] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.295888] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.664222] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.065570] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.848518] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +8.732358] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 9 20:21] systemd-fstab-generator[5090]: Ignoring "noauto" option for root device
	[Oct 9 20:23] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +0.064209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:25:45 up 8 min,  0 users,  load average: 0.12, 0.10, 0.04
	Linux old-k8s-version-169021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000c174d0)
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: goroutine 165 [select]:
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00058def0, 0x4f0ac20, 0xc000bdbcc0, 0x1, 0xc0001000c0)
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000254540, 0xc0001000c0)
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000c07070, 0xc000c11cc0)
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 09 20:25:42 old-k8s-version-169021 kubelet[5544]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 09 20:25:42 old-k8s-version-169021 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 09 20:25:42 old-k8s-version-169021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 09 20:25:43 old-k8s-version-169021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 09 20:25:43 old-k8s-version-169021 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 09 20:25:43 old-k8s-version-169021 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 09 20:25:43 old-k8s-version-169021 kubelet[5611]: I1009 20:25:43.218414    5611 server.go:416] Version: v1.20.0
	Oct 09 20:25:43 old-k8s-version-169021 kubelet[5611]: I1009 20:25:43.219949    5611 server.go:837] Client rotation is on, will bootstrap in background
	Oct 09 20:25:43 old-k8s-version-169021 kubelet[5611]: I1009 20:25:43.226766    5611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 09 20:25:43 old-k8s-version-169021 kubelet[5611]: W1009 20:25:43.228332    5611 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 09 20:25:43 old-k8s-version-169021 kubelet[5611]: I1009 20:25:43.228375    5611 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (229.60658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-169021" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (721.85s)
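The kubeadm output above indicates the kubelet on old-k8s-version-169021 never answered on port 10248, so no control-plane containers were ever created. A minimal shell sketch of the manual diagnosis the log itself suggests (assuming SSH access to the node through the minikube profile named above) would be:

	# Open a shell on the affected node (profile name taken from the log above)
	minikube ssh -p old-k8s-version-169021
	# Inspect the kubelet service and its recent journal entries
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200
	# Probe the healthz endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# List any Kubernetes containers CRI-O started (the log above found none)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet journal shows a cgroup-driver mismatch, the next step is the one the log already suggests: rerun minikube start with --extra-config=kubelet.cgroup-driver=systemd.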

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503330 -n embed-certs-503330
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:31:07.591336864 +0000 UTC m=+6278.581109781
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
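The same condition can also be checked by hand before reading the automated post-mortem below; a minimal sketch, assuming the kubeconfig context embed-certs-503330 created by the profile above, is:

	# Query the dashboard pods the test waits for (namespace and label taken from the wait message above)
	kubectl --context embed-certs-503330 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# Show recent events in the namespace to see why the pod never became Ready
	kubectl --context embed-certs-503330 -n kubernetes-dashboard get events --sort-by=.lastTimestamp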
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-503330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-503330 logs -n 25: (2.090214437s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
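The long run of "no route to host" dial errors above ends in the provision failure just logged for no-preload-480205. A minimal manual reachability check from the CI host might look like the sketch below; it is purely illustrative, was not part of this run, and the id_rsa path is assumed by analogy with the embed-certs-503330 key path shown later in the log:

	# Is the libvirt domain actually running?
	virsh -c qemu:///system domstate no-preload-480205
	# Can the host reach the guest's SSH port at all?
	nc -vz -w 5 192.168.39.162 22
	# Same check with the machine's provisioning key (path assumed, not taken from this log)
	ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa \
	  docker@192.168.39.162 true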
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
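The SSH command above is the hosts-file half of hostname provisioning. As a standalone sketch (hostname taken from this log; applying it to any other machine is an assumption), it amounts to:

	HOSTNAME=embed-certs-503330
	# Rewrite or append the 127.0.1.1 entry only if the hostname is not already present
	if ! grep -xq ".*\s${HOSTNAME}" /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/g" /etc/hosts
	  else
	    echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi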
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
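The sysconfig write a few lines above injects an insecure-registry flag for the service CIDR and restarts cri-o. An illustrative way to confirm the result inside the guest (not something this run did; it assumes the crio unit consumes /etc/sysconfig/crio.minikube) would be:

	# Show the drop-in that the tee command created
	minikube ssh -p embed-certs-503330 -- cat /etc/sysconfig/crio.minikube
	# Inspect the crio unit to see how that file is wired in
	minikube ssh -p embed-certs-503330 -- sudo systemctl cat crio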
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
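	The block above reduces to a handful of shell edits on the guest: point crictl at the CRI-O socket, pin the pause image, switch CRI-O to the cgroupfs cgroup manager, then restart and verify the runtime. A minimal sketch of those same steps, using the sed expressions and paths exactly as they appear in the log:

	    # point crictl at the CRI-O socket
	    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

	    # pin the pause image and use cgroupfs as the cgroup manager
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf

	    # restart the runtime and confirm it answers on the CRI socket
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio
	    sudo crictl version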
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
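	The retry loop above is the KVM driver polling libvirt for the domain's DHCP lease until the guest reports an IP. The same lease table can be inspected by hand; a sketch using virsh (the network name is copied from the log, and running virsh directly on the host is an assumption, not something the test itself does):

	    # list DHCP leases on the profile's private libvirt network; the table stays
	    # empty until the guest has booted far enough to request an address
	    sudo virsh net-dhcp-leases mk-default-k8s-diff-port-733270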
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
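	The ls/openssl/ln sequence above installs each CA into the guest's system trust store: OpenSSL resolves trust anchors through <subject-hash>.0 symlinks under /etc/ssl/certs, so each certificate gets linked under the hash that `openssl x509 -hash` reports (b5213941 for minikubeCA in this run). A sketch of that convention with the same file names:

	    # compute the OpenSSL subject hash and link the CA under it so it is trusted system-wide
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # b5213941.0 in this run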
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
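	The openssl calls above decide whether the existing control-plane certificates can be reused on restart: -checkend 86400 exits non-zero if the certificate file is missing or expires within the next 24 hours. The same check in isolation, with one path copied from the log:

	    # exit status 0 = certificate still valid for at least 86400 seconds (24h)
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "certificate ok, reusing"
	    else
	        echo "certificate missing or about to expire"
	    fi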
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
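	The health wait that just completed alternates between confirming a kube-apiserver process exists and probing /healthz until it returns a plain "ok"; the 403 and 500 responses earlier in the log are the expected transient states while the rbac/bootstrap-roles and apiservice-discovery poststarthooks finish. A rough shell equivalent of that polling (the real check uses an authenticated client, so the anonymous curl with -k is only illustrative):

	    # wait for the apiserver process (same pgrep pattern the log records)
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done

	    # then poll /healthz until it returns exactly "ok"
	    until [ "$(curl -ks https://192.168.50.97:8443/healthz)" = "ok" ]; do sleep 0.5; done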
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
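The WaitForSSH probe above shells out to the system ssh client with host-key checking disabled, using the machine's generated key. Condensed from the flags logged above into a standalone sketch (same key path and guest IP), the reachability check is roughly:

    # probe the freshly booted guest over SSH; a clean exit means provisioning can proceed
    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa \
      -p 22 docker@192.168.72.134 'exit 0'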
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
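The two SSH commands above set the guest hostname and pin it in /etc/hosts. Gathered into one sketch (hostname and substitutions exactly as logged):

    # persist the hostname, then map it to 127.0.1.1 without duplicating entries
    sudo hostname default-k8s-diff-port-733270 && \
      echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
    if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts
      else
        echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts
      fi
    fi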
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
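Setting the container-runtime option, as the command above shows, amounts to dropping a sysconfig fragment for CRI-O and restarting the service; as a standalone sketch:

    # treat the in-cluster service CIDR as an insecure registry for CRI-O
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio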
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
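The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the pause image and cgroup driver this kubeadm setup expects, and so unprivileged pods may bind low ports; collected into one sketch (same file and values as logged):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pause image and cgroupfs cgroup manager, with conmon placed in the pod cgroup
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # ensure a default_sysctls block exists, then open unprivileged ports from 0
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"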
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
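The failed sysctl probe just means br_netfilter is not loaded yet; the fallback shown above loads the module and enables IPv4 forwarding before CRI-O is restarted. Roughly:

    # bridged pod traffic must pass through iptables, and the node must forward IPv4
    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"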
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
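host.minikube.internal is pinned to the host-side gateway (192.168.72.1 here) by rewriting /etc/hosts through a temp file; the logged one-liner, unrolled for readability:

    # drop any stale host.minikube.internal line, append the current one, then swap the file in
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.72.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts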
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
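Since the image check above found nothing preloaded, the cached preload tarball is copied to the guest and unpacked into /var (where CRI-O keeps its image store), then removed; the unpack step is:

    # extract the preloaded image tarball, preserving security xattrs, then clean up
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4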
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
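	The multi-document block above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is the kubeadm config that the log subsequently writes to /var/tmp/minikube/kubeadm.yaml.new. As an illustrative sketch only, not minikube's own code, the documents in such a file could be split and sanity-checked with gopkg.in/yaml.v3 (the library choice is an assumption of this sketch):

    // Sketch: sanity-check a multi-document kubeadm YAML file.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency for this sketch
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            // Each document should carry apiVersion and kind, e.g.
            // kubeadm.k8s.io/v1beta3 / ClusterConfiguration.
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }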
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
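	The series of "openssl x509 -noout -in ... -checkend 86400" runs above asks whether each control-plane certificate will still be valid 24 hours from now. A rough Go equivalent of one such check (this is a sketch, not minikube's implementation; the file path is one of the certs checked in the log) would parse the PEM and compare NotAfter:

    // Sketch: report whether a PEM-encoded certificate expires within the next 24h,
    // roughly equivalent to `openssl x509 -noout -in <file> -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
        } else {
            fmt.Println("certificate is valid for at least 24h")
        }
    }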
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
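	The healthz wait above is an unauthenticated GET against https://192.168.72.134:8444/healthz, which first returns 403 and then 500 while the apiserver's post-start hooks (rbac/bootstrap-roles, priority-and-fairness-config-producer, apiservice-discovery-controller) finish, and finally 200. A minimal polling loop of the same shape could look like the following; the insecure TLS setting, interval, and timeout are choices of this sketch, not minikube's actual values:

    // Sketch: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe is anonymous, so the server certificate is not verified here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.72.134:8444/healthz" // address and port from the log above
        deadline := time.Now().Add(2 * time.Minute)  // arbitrary timeout for this sketch

        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver is healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }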
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
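	The pod_ready waits above skip pods whose node still reports Ready=False and otherwise poll each system-critical pod for its Ready condition. A condensed client-go version of that per-pod check might look like the following sketch; the kubeconfig path is a placeholder and this is not minikube's own helper code (the pod name is taken from the log above):

    // Sketch: check whether a pod has the Ready condition set to True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-zr4bl", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }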
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
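	The retry.go lines interleaved through this section wait for the old-k8s-version-169021 VM to obtain an IP address, sleeping for progressively longer intervals between lookups of the DHCP lease. A generic retry helper of that shape could be sketched as follows; the growth factor, cap, and the stand-in operation are illustrative choices, not minikube's actual values:

    // Sketch: retry an operation with growing sleep intervals, similar in spirit
    // to the "will retry after ..." messages in the log above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func retry(op func() error, attempts int, initial time.Duration) error {
        delay := initial
        for i := 0; i < attempts; i++ {
            err := op()
            if err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
            time.Sleep(delay)
            delay *= 2
            if delay > 10*time.Second { // cap chosen arbitrarily for this sketch
                delay = 10 * time.Second
            }
        }
        return errors.New("all attempts failed")
    }

    func main() {
        // Hypothetical operation standing in for "look up the VM's DHCP lease".
        calls := 0
        err := retry(func() error {
            calls++
            if calls < 3 {
                return errors.New("machine has no IP yet")
            }
            return nil
        }, 5, time.Second)
        fmt.Println("result:", err)
    }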
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
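
	The load sequence above copies each cached image tarball to the node and imports it with podman before removing the stale references via crictl. For illustration only, a minimal local Go sketch of that import step (this is not minikube's ssh_runner-based implementation; the tarball path is copied from the log and sudo/podman are assumed to be available on the host):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    )

	    // loadCachedImage imports an image tarball into the container runtime's
	    // storage, mirroring the logged "sudo podman load -i <tar>" invocations.
	    func loadCachedImage(tarPath string) error {
	    	if out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput(); err != nil {
	    		return fmt.Errorf("podman load %s: %v\n%s", tarPath, err, out)
	    	}
	    	return nil
	    }

	    func main() {
	    	if err := loadCachedImage("/var/lib/minikube/images/etcd_3.5.15-0"); err != nil {
	    		fmt.Println(err)
	    	}
	    }
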
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
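
	The "openssl x509 -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. For illustration only, a minimal Go sketch of the same check (not minikube's implementation; the certificate path is taken from the log):

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // certExpiresWithin reports whether the PEM certificate at path expires
	    // within d, the question "openssl x509 -checkend 86400" answers.
	    func certExpiresWithin(path string, d time.Duration) (bool, error) {
	    	data, err := os.ReadFile(path)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("no PEM block in %s", path)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(d).After(cert.NotAfter), nil
	    }

	    func main() {
	    	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    	if err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	fmt.Println("expires within 24h:", expiring)
	    }
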
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
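The block above is minikube's control-plane probe: for each expected component it asks CRI-O (via crictl) for a matching container, and because the API server never came up every query returns an empty ID list ("found id: \"\""). A minimal bash sketch of that probe, reusing the exact crictl invocation and component names shown in the log (the loop wrapper itself is illustrative, not minikube's code):

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  # same command the log runs for each component
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container matching \"$name\""
	done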
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
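Once the container probes come up empty, logs.go falls back to host-level log gathering (kubelet and crio journals, dmesg, container status), as the "Gathering logs for ..." lines above show. A sketch of how the same evidence could be replayed by hand inside the guest; the minikube ssh wrapper and the <profile> placeholder are assumptions, only the inner commands are taken from the log:

	minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 400
	minikube ssh -p <profile> -- sudo journalctl -u crio -n 400
	minikube ssh -p <profile> -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	minikube ssh -p <profile> -- sudo crictl ps -a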
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
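The interleaved pod_ready.go lines come from other test processes (PIDs 64109, 63427, 63744) polling metrics-server pods in their own profiles; each poll reports the pod's Ready condition as False. A hedged way to inspect the same condition by hand; the pod name is copied from the log, while the kubectl invocation and the <profile> context placeholder are standard usage rather than anything run by the test:

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-8p24l \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'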
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
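Every "describe nodes" attempt in this window fails identically: the bundled kubectl cannot reach localhost:8443 because no kube-apiserver container or process ever appeared. A quick connectivity check one could run inside the guest to confirm the symptom; these two commands are illustrative standard tooling, not part of the test's own diagnostics:

	sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
	curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"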
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
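Taken together, the timestamps show the whole cycle (pgrep for kube-apiserver, the crictl probes, then log gathering) repeating roughly every three seconds: a poll-until-healthy loop that never succeeds. A minimal bash sketch of that wait pattern, assuming an illustrative 300s deadline; only the pgrep and crictl probes are the commands the log actually runs:

	deadline=$((SECONDS + 300))   # illustrative timeout, not minikube's
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "kube-apiserver never started"; break; }
	  sudo crictl ps -a --quiet --name=kube-apiserver   # same probe as in the log
	  sleep 3
	done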
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
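
The cycle above repeats throughout this log: for each expected control-plane component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) the runner asks the CRI runtime for matching container IDs with `crictl ps -a --quiet --name=<component>` and every pass comes back empty, which is why each component logs `0 containers`. The sketch below is not minikube's actual code; it is a minimal stand-in for that enumeration pass, assuming `crictl` is available on the node and may be invoked via `sudo` as the log shows.

```go
// Minimal sketch of the container-enumeration pass seen in the log above
// (not minikube's implementation). For each expected control-plane
// component, ask the CRI runtime for matching container IDs and report
// when none exist.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent to: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%q: %d container(s): %v\n", name, len(ids), ids)
	}
}
```
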
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
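
The repeated "describe nodes" failure is consistent with the empty crictl results: with no kube-apiserver container running, nothing is listening on localhost:8443, so kubectl's connection is refused on every pass. As a purely illustrative aid (not part of the test), the sketch below shows one way to distinguish "connection refused" (nothing bound to the port, i.e. the apiserver never started) from other dial failures such as timeouts; the localhost:8443 address is taken from the error text above.

```go
// Illustrative probe, assuming the apiserver would listen on localhost:8443
// as the kubeconfig used in the log expects. Distinguishes "connection
// refused" from other dial errors.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("connection refused: no process bound to localhost:8443 (apiserver not running)")
		return
	}
	fmt.Printf("dial failed for another reason: %v\n", err)
}
```
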
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
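
At this point process 63744 has given up: the extra readiness wait for metrics-server-6867b74b74-6z7jj hit its 4m0s ceiling ("will not retry"), restartPrimaryControlPlane is abandoned after 4m22s, and the runner falls back to `kubeadm reset` followed by a fresh init. The snippet below is only a schematic of that poll-until-deadline pattern, not the minikube code path; `podIsReady` is a hypothetical placeholder for the real check against the Kubernetes API.

```go
// Schematic of the wait-with-deadline behaviour in the log: poll a readiness
// condition every 2s for up to 4 minutes, then give up and fall back.
// podIsReady is a hypothetical stand-in, not a real minikube function.
package main

import (
	"fmt"
	"time"
)

func podIsReady() bool {
	// Placeholder: the real check queries the pod's Ready condition
	// via the Kubernetes API.
	return false
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if podIsReady() {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting 4m0s for pod to be Ready; falling back to cluster reset")
}
```
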
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
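The repeated "kubectl get sa default" runs just above are minikube waiting for kubeadm to create the default service account before it grants kube-system elevated RBAC (the elevateKubeSystemPrivileges step whose duration is reported here). As a rough illustration only, not minikube's actual code, the same wait could be expressed with client-go; the kubeconfig path is the one shown in the log (and assumed reachable from where this runs), and the timeout is an assumed value:

```go
// Illustrative sketch: poll until the "default" ServiceAccount exists, the same
// condition the repeated "kubectl get sa default" calls in the log check for.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log; adjust if running outside the VM.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// kubeadm creates the ServiceAccount shortly after the control plane comes up,
	// which is why the log shows several half-second-spaced attempts.
	for {
		_, err := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default ServiceAccount is present")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for default ServiceAccount")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
```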
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
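The pod_ready waits above keep re-reading each system pod until its Ready condition turns True. A minimal client-go sketch of that single check (clientset construction assumed to have happened elsewhere; the helper name is illustrative):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod currently has its Ready condition
// set to True, which is what the pod_ready waits in the log keep checking.
func podIsReady(ctx context.Context, clientset kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```

Called in a loop with a short sleep, this reproduces the "Ready":"False" to "Ready":"True" transitions reported for coredns, etcd, and the other control-plane pods above.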
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
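The healthz wait above is a plain HTTPS GET against the apiserver endpoint; a healthy server answers 200 with the body "ok", exactly as logged. A short sketch of the same probe, with the endpoint taken from the log and TLS verification skipped purely to keep the example small (an assumption made here, not a statement about how minikube verifies the connection):

```go
// Illustrative sketch of the /healthz probe shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification is a shortcut for this sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.97:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver returns 200 and the body "ok".
	fmt.Println(resp.StatusCode, string(body))
}
```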
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
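The node_conditions lines read the node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage here) and its pressure conditions. A hedged sketch of reading the same fields with client-go (clientset setup assumed; the function name is illustrative):

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacityAndPressure prints the capacity figures and pressure
// conditions that the node_conditions checks above are based on.
func printNodeCapacityAndPressure(ctx context.Context, clientset kubernetes.Interface, nodeName string) error {
	node, err := clientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Println("cpu capacity:", cpu.String())
	fmt.Println("ephemeral-storage capacity:", eph.String())
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", cond.Type, cond.Status)
		}
	}
	return nil
}
```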
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
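The cleanup sequence above greps each existing kubeconfig for the expected endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not mention it, so the following kubeadm init regenerates it. A rough local sketch of that rule (paths and endpoint are the ones from the log; the helper name is illustrative and this is not minikube's actual implementation):

```go
package sketch

import (
	"os"
	"strings"
)

// removeIfStale deletes a kubeconfig-style file when it does not reference the
// expected control-plane endpoint, mirroring the grep-then-rm sequence above.
// A file that does not exist is simply left alone, as in the log.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return err
	}
	if !strings.Contains(string(data), endpoint) {
		return os.Remove(path)
	}
	return nil
}
```

For example, removeIfStale("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:8444") would delete the file only if it exists and lacks that endpoint.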
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
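Note: the entries that follow switch to a second start attempt (process 64287) against Kubernetes v1.20.0, where kubeadm's wait-control-plane phase times out because the kubelet never answers its health check on localhost:10248. A minimal sketch of the diagnostics that kubeadm's own output below recommends, assuming shell access to the affected node (for example via 'minikube ssh' with the matching profile), would be:

	# Is the kubelet service up at all?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# The health endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz

	# If the kubelet is up, check whether a control-plane container crashed (CRI-O socket)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # substitute the failing container's ID
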
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
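Note: the cleanup above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if it does not reference it. Here none of the files exist, so every grep exits with status 2 ("No such file or directory") and the rm is a no-op before the kubeadm init retry below. A rough sketch of that per-file check, using the endpoint shown in this run, is:

	# repeated for admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf
	sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf \
	  || sudo rm -f /etc/kubernetes/admin.conf
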
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
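Note: the remainder of this dump appears to be the CRI-O section of the collected node logs for embed-certs-503330: per-request debug traces from the crio daemon (ImageFsInfo and ListContainers calls) showing the control-plane and workload containers in CONTAINER_RUNNING state. Roughly equivalent output can be gathered on the node with the same commands the log collector runs earlier in this report:

	sudo journalctl -u crio -n 400    # CRI-O daemon log
	sudo crictl ps -a                 # container status
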
	
	
	==> CRI-O <==
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.055710043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505869055687110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04532a1a-5c6e-487b-9717-c8406f14e5bf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.056318041Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c1ae05c-780d-4224-b227-be30571732d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.056393941Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c1ae05c-780d-4224-b227-be30571732d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.056612074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c1ae05c-780d-4224-b227-be30571732d2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.094885764Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf199934-55eb-4145-bfc2-67089072eb56 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.095023521Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf199934-55eb-4145-bfc2-67089072eb56 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.096334583Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa627694-65b7-458a-b118-bda6c00ed0e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.096713088Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505869096692942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa627694-65b7-458a-b118-bda6c00ed0e7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.097451613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48c84e81-7269-4629-9d33-1cd5afa0e873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.097499089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48c84e81-7269-4629-9d33-1cd5afa0e873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.097721571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48c84e81-7269-4629-9d33-1cd5afa0e873 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.135248965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=782b8ccf-8a9c-4224-af55-2edd575b15fa name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.135338349Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=782b8ccf-8a9c-4224-af55-2edd575b15fa name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.136473087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8622af4-7d8b-4b27-9b33-9958dd39cc7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.136910230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505869136850561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8622af4-7d8b-4b27-9b33-9958dd39cc7f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.137442312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=099e9869-e3eb-4676-97d2-160c94e9e664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.137508234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=099e9869-e3eb-4676-97d2-160c94e9e664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.137719267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=099e9869-e3eb-4676-97d2-160c94e9e664 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.177384751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5c475a54-7db2-47d9-9d87-4e38ea3f41ff name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.177457372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5c475a54-7db2-47d9-9d87-4e38ea3f41ff name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.179148986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b95da0e-c5eb-41cb-b088-bf9903186dc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.179657009Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505869179634494,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b95da0e-c5eb-41cb-b088-bf9903186dc0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.180425543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e54d621-87be-4c25-9b64-bd6d9b7bd61a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.180518196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e54d621-87be-4c25-9b64-bd6d9b7bd61a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:09 embed-certs-503330 crio[704]: time="2024-10-09 20:31:09.180727013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e54d621-87be-4c25-9b64-bd6d9b7bd61a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f54c3ad65ef8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8f5916641bbd9       coredns-7c65d6cfc9-sttbg
	0929c43db517c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   da48c0dc35de9       coredns-7c65d6cfc9-j62fb
	a4b1466595b03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d92c670b77447       storage-provisioner
	f0fe16f40d36b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   22876380557a1       kube-proxy-k4sqz
	690ad9c304dde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   b2bde796ac664       etcd-embed-certs-503330
	e84e79116fa9d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   62eaadee2c3ca       kube-scheduler-embed-certs-503330
	48c2502451c29       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   d5072486eae96       kube-apiserver-embed-certs-503330
	a4c55d4cc5526       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   93886a9a91159       kube-controller-manager-embed-certs-503330
	6c6d9ae1a9bc9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   caa0e92eb3b98       kube-apiserver-embed-certs-503330
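
	A similar container listing can usually be reproduced straight from the CRI runtime, assuming the embed-certs-503330 VM is still running (illustrative invocation, not part of the captured output):

	    out/minikube-linux-amd64 ssh -p embed-certs-503330 -- sudo crictl ps -a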
	
	
	==> coredns [0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
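
	The coredns sections above only capture the startup banner; assuming the pods are still scheduled, fuller logs can be pulled per pod (illustrative commands, pod names taken from the container listing above):

	    kubectl --context embed-certs-503330 -n kube-system logs coredns-7c65d6cfc9-j62fb
	    kubectl --context embed-certs-503330 -n kube-system logs coredns-7c65d6cfc9-sttbg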
	
	
	==> describe nodes <==
	Name:               embed-certs-503330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-503330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=embed-certs-503330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:21:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-503330
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:31:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:27:08 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:27:08 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:27:08 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:27:08 +0000   Wed, 09 Oct 2024 20:21:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.97
	  Hostname:    embed-certs-503330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4243dbb5d07040f1ad6a69aba7094125
	  System UUID:                4243dbb5-d070-40f1-ad6a-69aba7094125
	  Boot ID:                    ddf6df5b-081d-4a26-9b14-4a310973fe13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-j62fb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-sttbg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-503330                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-503330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-503330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-k4sqz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-embed-certs-503330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-79m5x               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node embed-certs-503330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node embed-certs-503330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node embed-certs-503330 event: Registered Node embed-certs-503330 in Controller
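
	The node description above is the standard kubectl view and, assuming the cluster is still reachable, can be regenerated with (illustrative):

	    kubectl --context embed-certs-503330 describe node embed-certs-503330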
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040101] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.844461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556668] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.610089] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.746912] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.112208] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.169664] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.166908] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.306774] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[  +4.025426] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +1.990110] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.067524] kauditd_printk_skb: 158 callbacks suppressed
	[Oct 9 20:17] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.817807] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 9 20:21] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.405960] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +4.639817] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.432243] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +5.832776] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.061212] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[Oct 9 20:22] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515] <==
	{"level":"info","ts":"2024-10-09T20:21:46.708399Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T20:21:46.708607Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"1f2cc3497df204b1","initial-advertise-peer-urls":["https://192.168.50.97:2380"],"listen-peer-urls":["https://192.168.50.97:2380"],"advertise-client-urls":["https://192.168.50.97:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.97:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T20:21:46.708647Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T20:21:46.712392Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2024-10-09T20:21:46.712444Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.97:2380"}
	{"level":"info","ts":"2024-10-09T20:21:47.313054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-09T20:21:47.313217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T20:21:47.313269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 received MsgPreVoteResp from 1f2cc3497df204b1 at term 1"}
	{"level":"info","ts":"2024-10-09T20:21:47.313300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:21:47.313336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 received MsgVoteResp from 1f2cc3497df204b1 at term 2"}
	{"level":"info","ts":"2024-10-09T20:21:47.313365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1f2cc3497df204b1 became leader at term 2"}
	{"level":"info","ts":"2024-10-09T20:21:47.313391Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1f2cc3497df204b1 elected leader 1f2cc3497df204b1 at term 2"}
	{"level":"info","ts":"2024-10-09T20:21:47.315191Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1f2cc3497df204b1","local-member-attributes":"{Name:embed-certs-503330 ClientURLs:[https://192.168.50.97:2379]}","request-path":"/0/members/1f2cc3497df204b1/attributes","cluster-id":"a36d2e63d2f8b676","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:21:47.316378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:21:47.316754Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a36d2e63d2f8b676","local-member-id":"1f2cc3497df204b1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316873Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316898Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:21:47.316924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:21:47.316932Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:21:47.318177Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:21:47.319370Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:21:47.322546Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:21:47.323264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.97:2379"}
	
	
	==> kernel <==
	 20:31:09 up 14 min,  0 users,  load average: 0.21, 0.12, 0.10
	Linux embed-certs-503330 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc] <==
	W1009 20:26:49.819736       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:26:49.819792       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:26:49.820940       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:26:49.821002       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:27:49.821894       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:27:49.822024       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:27:49.822120       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:27:49.822152       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:27:49.823159       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:27:49.823194       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:29:49.824128       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:29:49.824488       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:29:49.824587       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:29:49.824620       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:29:49.825780       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:29:49.825840       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76] <==
	W1009 20:21:38.670273       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.676832       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.683520       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.724639       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.768849       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.807314       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.813834       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.903275       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.919264       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.962072       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.992942       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.011923       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.020622       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.049288       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.203365       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.487057       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.847644       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.873443       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.005351       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.095099       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.178352       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.280463       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.306273       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.333778       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.408696       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690] <==
	E1009 20:25:55.609619       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:25:56.257799       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:26:25.616295       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:26.266003       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:26:55.622424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:56.273783       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:27:08.673396       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-503330"
	E1009 20:27:25.627954       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:26.287394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:27:49.383857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="617.165µs"
	E1009 20:27:55.634615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:56.296214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:28:03.385754       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="164.238µs"
	E1009 20:28:25.641090       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:26.304406       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:28:55.648279       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:56.311385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:29:25.654067       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:26.323022       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:29:55.661351       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:56.330546       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:25.667610       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:26.339902       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:55.674102       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:56.348721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:21:57.263796       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:21:57.273819       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.97"]
	E1009 20:21:57.274014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:21:57.355199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:21:57.355239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:21:57.355269       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:21:57.358020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:21:57.358274       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:21:57.358286       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:57.360401       1 config.go:199] "Starting service config controller"
	I1009 20:21:57.360435       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:21:57.360463       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:21:57.360470       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:21:57.366722       1 config.go:328] "Starting node config controller"
	I1009 20:21:57.366737       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:21:57.462183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:21:57.462250       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:21:57.468485       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53] <==
	W1009 20:21:48.848919       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:48.849027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.685846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:21:49.685913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.700434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:21:49.700499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.713865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:49.714205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.737208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:21:49.737238       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 20:21:49.793303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:21:49.793632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.803322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:21:49.803415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.832192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:21:49.832312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.950353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:49.950483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.999430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:50.000883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:50.043680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:50.043923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:50.102040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:21:50.102119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1009 20:21:52.437260       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:30:01 embed-certs-503330 kubelet[2900]: E1009 20:30:01.501528    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505801501144615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:01 embed-certs-503330 kubelet[2900]: E1009 20:30:01.501566    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505801501144615,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:08 embed-certs-503330 kubelet[2900]: E1009 20:30:08.365383    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:30:11 embed-certs-503330 kubelet[2900]: E1009 20:30:11.502597    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505811502378608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:11 embed-certs-503330 kubelet[2900]: E1009 20:30:11.502622    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505811502378608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:20 embed-certs-503330 kubelet[2900]: E1009 20:30:20.365493    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:30:21 embed-certs-503330 kubelet[2900]: E1009 20:30:21.503936    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505821503654704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:21 embed-certs-503330 kubelet[2900]: E1009 20:30:21.504300    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505821503654704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:31 embed-certs-503330 kubelet[2900]: E1009 20:30:31.505523    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505831505263674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:31 embed-certs-503330 kubelet[2900]: E1009 20:30:31.505555    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505831505263674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:32 embed-certs-503330 kubelet[2900]: E1009 20:30:32.364457    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:30:41 embed-certs-503330 kubelet[2900]: E1009 20:30:41.508934    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505841508387228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:41 embed-certs-503330 kubelet[2900]: E1009 20:30:41.509048    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505841508387228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:43 embed-certs-503330 kubelet[2900]: E1009 20:30:43.364284    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]: E1009 20:30:51.390804    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]: E1009 20:30:51.510759    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505851510311086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:51 embed-certs-503330 kubelet[2900]: E1009 20:30:51.510781    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505851510311086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:57 embed-certs-503330 kubelet[2900]: E1009 20:30:57.366946    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:31:01 embed-certs-503330 kubelet[2900]: E1009 20:31:01.512184    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505861511790244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:01 embed-certs-503330 kubelet[2900]: E1009 20:31:01.512654    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505861511790244,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:09 embed-certs-503330 kubelet[2900]: E1009 20:31:09.368068    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	
	
	==> storage-provisioner [a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495] <==
	I1009 20:21:58.406578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:21:58.418393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:21:58.418573       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:21:58.443273       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:21:58.449775       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"528b0580-21de-4f83-ac54-e262fc998faf", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae became leader
	I1009 20:21:58.450059       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae!
	I1009 20:21:58.550420       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae!
	

-- /stdout --
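
The kube-proxy "Error cleaning up nftables rules ... Operation not supported" lines and the kubelet "Could not set up iptables canary" ip6tables errors in the log above look like guest-kernel capability noise rather than the failure itself: kube-proxy falls back to the iptables proxier and reports all of its caches as synced. One way to confirm that, assuming the embed-certs-503330 profile is still running (KUBE-SERVICES is the standard kube-proxy nat chain, not something this report prints):

	out/minikube-linux-amd64 -p embed-certs-503330 ssh "sudo iptables -t nat -L KUBE-SERVICES | head"
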
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-503330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-79m5x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x: exit status 1 (64.699596ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-79m5x" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.19s)
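
Two details stand out in the embed-certs post-mortem above. First, the ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 is expected: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), so that pod can never pull its image. Second, the final describe of metrics-server-6867b74b74-79m5x returned NotFound only because no namespace was passed; per the kubelet log the pod lives in kube-system. A re-check along those lines, assuming the profile is still up and that the addon carries its usual k8s-app=metrics-server label:

	kubectl --context embed-certs-503330 -n kube-system describe pod metrics-server-6867b74b74-79m5x
	kubectl --context embed-certs-503330 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context embed-certs-503330 get apiservice v1beta1.metrics.k8s.io -o yaml

The last command should reflect the same unavailable state behind the repeated 503s for v1beta1.metrics.k8s.io in the kube-apiserver log.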

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:31:28.98427607 +0000 UTC m=+6299.974048987
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
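
The wait condition that timed out above is pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. If the default-k8s-diff-port-733270 profile is still reachable, querying that namespace directly shows whether the dashboard objects were created at all or created but never became ready; both commands are plain kubectl, with nothing assumed beyond the context name and the label the test itself waits on:

	kubectl --context default-k8s-diff-port-733270 -n kubernetes-dashboard get deploy,pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-733270 -n kubernetes-dashboard get events --sort-by=.lastTimestamp
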
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-733270 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-733270 logs -n 25: (2.025211634s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
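
	The retry.go lines above poll libvirt for the machine's DHCP lease, backing off from roughly 300ms to several seconds between attempts. A minimal Go sketch of such a wait-for-IP loop follows; the helper name waitForIP and the exact backoff and jitter factors are illustrative assumptions, not the actual minikube retry.go implementation.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with a randomized, growing delay, mirroring the
	// "will retry after ..." lines in the log above. Illustrative sketch only.
	func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
		start := time.Now()
		delay := 250 * time.Millisecond
		for time.Since(start) < deadline {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay after each failed attempt
		}
		return "", errors.New("unable to find current IP address before deadline")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.50.97", nil
		}, 30*time.Second)
		fmt.Println(ip, err)
	}
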
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
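
	The SSH command above makes the /etc/hosts update idempotent: if no existing line already ends with the new hostname, it rewrites an existing "127.0.1.1 ..." entry, otherwise it appends one. A minimal Go sketch of the same replace-or-append logic; ensureHostsEntry is a hypothetical name used here for illustration, not a minikube function.

	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet above: if no line already maps the
	// hostname, rewrite an existing "127.0.1.1 ..." entry or append a new one.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
			return hosts // hostname already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}

	func main() {
		fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "embed-certs-503330"))
	}
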
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
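
	The fix.go lines above compare the guest clock (1728505009.274901835 seconds since the epoch, i.e. 20:16:49.274901835 UTC) against the host-side reference 20:16:49.184353734 UTC; the difference is the logged delta of 90.548101ms, which passes the tolerance check. A short Go sketch of that arithmetic; the one-second tolerance constant is an assumption for illustration, not a value taken from this log.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Values copied from the log above: guest clock as seconds.nanoseconds since
		// the epoch, and the host-side reference timestamp.
		guest := time.Unix(1728505009, 274901835)
		remote := time.Date(2024, 10, 9, 20, 16, 49, 184353734, time.UTC)

		delta := guest.Sub(remote)    // 90.548101ms, matching the logged delta
		const tolerance = time.Second // illustrative threshold, not from the log
		within := delta < tolerance && delta > -tolerance
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, within)
	}
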
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
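For reference, the delta reported here is simply the guest wall clock minus the remote wall clock captured in the two lines above: 1728505032.674262019 - 1728505032.604532015 = 0.069730004 s, i.e. the 69.730004ms shown, which is why fix.go treats the guest clock as within tolerance.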
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
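For reference, the CRI-O checks logged above (the crictl version and crio --version runs at 20:17:14, and the sed edits to /etc/crio/crio.conf.d/02-crio.conf) can be repeated by hand against the same VM. A minimal sketch, assuming the profile name shown in this log and a minikube binary on PATH:

	# open a shell on the machine under test (profile name taken from the log above)
	minikube ssh -p default-k8s-diff-port-733270
	# inside the VM: confirm the runtime version reported at start.go:579
	sudo crictl version
	crio --version
	# confirm the pause image and cgroup driver written by the sed commands above
	sudo grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf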
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
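The six openssl runs above are 24-hour expiry checks (-checkend 86400) against the existing control-plane certificates before they are reused. As a minimal sketch only (not minikube's code; the file path and the 24h window below are assumptions for illustration), the same question can be asked of a PEM certificate in Go:

// certcheck.go - sketch of an "expires within 24h?" test equivalent to
// `openssl x509 -noout -checkend 86400`. Path and window are examples only.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same question openssl answers: is NotAfter earlier than now+window?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}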
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
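The repeated "will retry after ...: waiting for machine to come up" lines above come from a backoff-style retry loop around the libvirt DHCP lease lookup for the old-k8s-version VM. A minimal sketch of that pattern follows; it assumes nothing about minikube's actual retry.go beyond what the log shows (growing, slightly jittered waits around a polled condition), and the condition function is a stand-in:

// retrywait.go - sketch of a "retry with increasing, jittered delay" loop.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the delay so concurrent waiters don't poll in lockstep,
		// then grow it for the next round, mirroring the log's increasing waits.
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return fmt.Errorf("condition not met after %d attempts: %w", attempts, err)
}

func main() {
	start := time.Now()
	err := retryWithBackoff(10, time.Second, func() error {
		// Stand-in for "waiting for machine to come up" (e.g. a DHCP lease check).
		if time.Since(start) < 5*time.Second {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("done:", err)
}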
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
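The block above is the apiserver readiness wait: /healthz is polled until the 403 (anonymous request) and 500 (post-start hooks still failing) responses give way to a 200. Below is a minimal sketch of such a poller, not minikube's actual api_server.go; the URL and timeout are assumptions, and certificate verification is skipped only because the apiserver certificate is signed by the cluster's own CA:

// healthwait.go - sketch of polling an apiserver /healthz endpoint until it
// returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Anonymous probe against a self-signed apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: the control plane is serving
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.134:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}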
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
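	Each of the three CA bundles above is installed the way OpenSSL expects: the subject hash printed by "openssl x509 -hash" becomes the name of a <hash>.0 symlink in /etc/ssl/certs, which is how the default verify path locates it. A minimal manual check, using the files from this log:
	
		# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		ls -l /etc/ssl/certs/b5213941.0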
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
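	The -checkend 86400 runs above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status would force regeneration. The same check by hand, as a sketch against one of the files listed above:
	
		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "still valid for at least 24h" || echo "expires within 24h"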
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
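	Because existing configuration files were found on disk, minikube takes the restartPrimaryControlPlane path rather than a full kubeadm init: it re-runs only the individual init phases against the rendered config (each invoked via sudo with PATH pointed at the pinned /var/lib/minikube/binaries/v1.20.0 binaries, as in the log). Condensed from the five commands above:
	
		kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
		kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml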
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
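	The repeated pgrep calls above are minikube polling, roughly every 500ms, for the kube-apiserver process that the freshly written static-pod manifests should start; -f matches against the full command line, -x makes that match exact, and -n keeps only the newest PID. A rough shell equivalent of the wait loop:
	
		until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done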
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
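	configureAuth provisions docker-machine style TLS for the guest: the host CA plus a newly generated server certificate and key (with the SANs listed in the provision.go line above) are copied into /etc/docker. A quick sanity check over SSH, as a sketch:
	
		sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
		sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'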
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
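	The sed/grep edits above rewrite CRI-O's minikube drop-in before the restart: pause image, cgroupfs as cgroup manager, conmon placed in the pod cgroup, and an unprivileged-port sysctl. The br_netfilter modprobe is the fallback because net.bridge.bridge-nf-call-iptables only exists once that module is loaded, and ip_forward=1 is required for pod traffic. Reconstructed from the commands above, the drop-in should end up containing roughly this excerpt:
	
		# /etc/crio/crio.conf.d/02-crio.conf (excerpt)
		pause_image = "registry.k8s.io/pause:3.10"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]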
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
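[Editor's aside] The cache_images/crio lines above repeat one pattern per image: inspect the image in the container runtime, mark it "needs transfer" when the expected hash is absent, remove any stale tag with crictl, then load the cached tarball with podman. A simplified per-image sketch of that flow (the ensureImage wrapper is illustrative, not minikube code; the command shapes mirror the log):

// Sketch: ensure one image is present in the runtime, loading it from the
// local cache tarball if the inspect shows it is missing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func ensureImage(image, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) != "" {
		return nil // already present in the container runtime
	}
	// Not preloaded: clear any stale tag, then load from the cached tarball.
	exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	err := ensureImage("registry.k8s.io/etcd:3.5.15-0",
		"/var/lib/minikube/images/etcd_3.5.15-0")
	fmt.Println(err)
}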
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
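[Editor's aside] The kubelet [Unit]/[Service] drop-in above is generated from the node's name, IP, and Kubernetes version. A small sketch of rendering it with Go's text/template follows; the template and field names here are illustrative, not minikube's actual template data.

// Sketch: render the kubelet systemd drop-in seen in the log.
package main

import (
	"os"
	"text/template"
)

const kubeletTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	// Values taken from the log above; any node would substitute its own.
	t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.31.1",
		"NodeName": "no-preload-480205",
		"NodeIP":   "192.168.39.162",
	})
}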
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
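[Editor's aside] Once rendered, the kubeadm.yaml written above is plain YAML, so its fields round-trip through any YAML decoder. A sketch, assuming the gopkg.in/yaml.v3 module, that parses the relaxed KubeletConfiguration eviction settings shown in the log (struct and constant names are illustrative):

// Sketch: decode a fragment of the generated KubeletConfiguration to show
// how the YAML above maps onto typed fields.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver string            `yaml:"cgroupDriver"`
	FailSwapOn   bool              `yaml:"failSwapOn"`
	EvictionHard map[string]string `yaml:"evictionHard"`
}

const doc = `
cgroupDriver: cgroupfs
failSwapOn: false
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
`

func main() {
	var cfg kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}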
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
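[Editor's aside] The three openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of the same step, with an illustrative installCA helper:

// Sketch: compute the OpenSSL subject hash of a PEM and link <hash>.0 in
// /etc/ssl/certs to it, as the log's test -L || ln -fs command does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCA(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs <pem> <hash>.0 so OpenSSL-based clients can find the CA by hash.
	if err := exec.Command("sudo", "ln", "-fs", pemPath, link).Run(); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := installCA("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}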
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
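[Editor's aside] The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509; the expiresSoon helper below is illustrative:

// Sketch: report whether a certificate expires within the given window,
// the inverse convention of `openssl x509 -checkend`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}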
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
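[Editor's aside] The grep/rm sequence above keeps a kubeconfig-style file only if it already references https://control-plane.minikube.internal:8443 and deletes it otherwise so kubeadm can regenerate it; here every file is missing, so there is nothing to clean. A minimal sketch of that check (cleanStaleConf is a hypothetical helper, not minikube's code):

// Sketch: delete a config file unless it already points at the expected
// control-plane endpoint; a missing file counts as already clean.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // nothing to clean
		}
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	err := cleanStaleConf("/etc/kubernetes/admin.conf",
		"https://control-plane.minikube.internal:8443")
	fmt.Println(err)
}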
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
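[Editor's aside] Because existing configuration was detected, the restart path above runs individual `kubeadm init phase` commands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than a full init. A sketch of that loop, with an illustrative runInitPhase wrapper:

// Sketch: run the kubeadm init phases the log shows, one at a time,
// stopping at the first failure.
package main

import (
	"fmt"
	"os/exec"
)

func runInitPhase(binDir, phase, config string) error {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`,
			binDir, phase, config))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("phase %q: %v: %s", phase, err, out)
	}
	return nil
}

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		if err := runInitPhase("/var/lib/minikube/binaries/v1.31.1", p, "/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println(err)
			return
		}
	}
}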
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
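The four-and-a-half-second wait above is a simple poll: minikube keeps requesting /healthz and treats the 500 responses (with the rbac and priority-class post-start hooks still pending) as "not yet", stopping as soon as the endpoint returns 200. A minimal standalone probe with the same shape is sketched below; the URL and the ~500ms retry cadence are taken from the log, the overall timeout is an assumption, TLS verification is skipped only because the test cluster serves a self-signed certificate, and this is an illustration rather than minikube's own api_server.go code.

    // healthzprobe: poll an apiserver /healthz endpoint until it returns 200 or a timeout elapses.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The test cluster uses a self-signed certificate, so verification is skipped here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // every post-start hook reported ok
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the retry cadence visible in the log
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.162:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }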
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
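The 496-byte /etc/cni/net.d/1-k8s.conflist written here is minikube's bridge CNI configuration; its exact contents are not reproduced in the log. For orientation, a bridge conflist of the same general shape looks roughly like the following, where the plugin options and the host-local subnet are illustrative values rather than the file minikube actually ships:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }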
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
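The NodePressure step above only needs the capacity fields already stored on the node object. A small client-go sketch that reads the same two values (ephemeral storage and CPU) is shown below; the kubeconfig path is a placeholder and error handling is reduced to panics for brevity.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; minikube writes its own kubeconfig under the test home directory.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }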
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
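The pattern behind all of the "skipping!" lines above: before waiting on each system-critical pod, the readiness of the hosting node is checked, and while the node still reports Ready=False the per-pod wait is short-circuited. A condensed sketch of that gate, assuming client-go, with function and package names invented for illustration:

    package readiness

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the node's NodeReady condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // podGate mirrors the behaviour in the log: skip waiting on a pod whose node is not Ready yet.
    func podGate(cs *kubernetes.Clientset, podName, nodeName string) error {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
        if err != nil {
            return err
        }
        if !nodeReady(node) {
            return fmt.Errorf("node %q hosting pod %q is not Ready yet (skipping)", nodeName, podName)
        }
        return nil
    }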
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
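The -16 read back here is the oom_adj value kubeadm applies to the apiserver so the kernel OOM killer avoids it. The check is just a procfs read; a minimal Go equivalent is below, with the pid assumed to come from a pgrep-style lookup rather than hard-coded as it is in this sketch.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        pid := 1234 // placeholder; the log obtains it via pgrep kube-apiserver
        data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
    }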
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
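Addon installation follows one pattern throughout this block: each manifest is copied into /etc/kubernetes/addons over SSH and then applied with the kubeadm-managed kubectl binary, as the Completed lines that follow confirm. A reduced local sketch of the apply step is below; the paths and binary version are copied from the log, but the real run goes through minikube's ssh_runner inside the VM rather than a local exec, and `sudo env` stands in for the inline KUBECONFIG assignment.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the metrics-server apply command from the log.
        cmd := exec.Command("sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
            "-f", "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "-f", "/etc/kubernetes/addons/metrics-server-service.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }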
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
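The long runs of pgrep lines from pid 64287 are a liveness poll: every ~500ms minikube re-runs the same pgrep on that profile's VM until a kube-apiserver process appears. A standalone version of that wait is sketched below; the pgrep arguments are copied from the log and the two-minute timeout is an assumption for illustration.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerProcess re-runs pgrep until it prints a PID or the timeout expires,
    // mirroring the polling cadence visible in the log. pgrep exits non-zero on no match.
    func waitForAPIServerProcess(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(strings.TrimSpace(string(out))) > 0 {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerProcess(2 * time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("kube-apiserver pid:", pid)
    }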
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
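With no CRI containers found, the loop above falls back to collecting diagnostics: the kubelet and CRI-O journals, filtered dmesg, container status, and a `kubectl describe nodes` that fails here because nothing is serving on localhost:8443 yet. The same bundle can be gathered by hand; the sketch below simply replays the shell commands from the log through bash and prints their output.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Diagnostic commands copied from the log; each runs via bash so the pipelines work.
        cmds := []string{
            `sudo journalctl -u kubelet -n 400`,
            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            `sudo journalctl -u crio -n 400`,
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
        }
        for _, c := range cmds {
            fmt.Println("### " + c)
            out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
            fmt.Println(string(out))
            if err != nil {
                fmt.Println("command failed:", err)
            }
        }
    }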
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
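	(The block above is one full iteration of the retry loop process 64287 is stuck in: probe for a kube-apiserver process with pgrep, ask the CRI runtime for each control-plane container by name, find nothing, then gather kubelet, dmesg, CRI-O and container-status logs while "kubectl describe nodes" keeps failing with connection refused on localhost:8443. A minimal shell sketch of that same probe sequence, built only from the commands that appear verbatim in the log above; this is an illustration, not minikube's actual code:

	#!/usr/bin/env bash
	# Sketch of one iteration of the checks logged above (assumes crictl, journalctl
	# and the bundled kubectl exist at the paths shown in the log).
	set -u

	# 1. Is any kube-apiserver process running for this profile?
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no apiserver process'

	# 2. Ask the CRI runtime for each expected container by name.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "no container was found matching \"$name\""
	done

	# 3. Collect the same diagnostics gathered on every failed iteration.
	sudo journalctl -u kubelet -n 400 > kubelet.log
	sudo journalctl -u crio -n 400 > crio.log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400 > dmesg.log
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  || echo 'apiserver still unreachable on localhost:8443'
	)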
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
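	(Interleaved with the 64287 loop, three other profiles — pids 64109, 63427 and 63744 — are polling metrics-server pods that never report Ready. A sketch of the manual check equivalent to what pod_ready is waiting on; the pod name is taken from the log, and the k8s-app=metrics-server label is the stock addon label, assumed here rather than confirmed by the log:

	# Sketch: inspect the Ready condition that pod_ready keeps reporting as "False".
	kubectl -n kube-system get pod metrics-server-6867b74b74-6z7jj \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or watch all metrics-server pods until they turn Ready:
	kubectl -n kube-system get pods -l k8s-app=metrics-server -w
	)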
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
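	(Every one of these "describe nodes" attempts fails the same way: the kubeconfig points kubectl at localhost:8443 and nothing is listening there. A quick manual check for that condition — a sketch, assuming ss and curl are available on the node; port 8443 comes from the error text above:

	# Sketch: confirm nothing is serving the port kubectl is trying to reach.
	ss -ltn 'sport = :8443'                    # empty output means no listener on 8443
	curl -ksS https://localhost:8443/healthz   # expect "connection refused" while the apiserver is down
	)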
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
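
The entries above are one full probe cycle from minikube on the old-k8s-version node while its control plane is down: for each expected component it asks CRI-O (via crictl) for a matching container, finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal bash sketch of that cycle, built only from the commands recorded in the log (illustrative, assumed to be run as root on the node; it is not the test's own tooling):

    # Probe CRI-O for each control-plane container; every query above returned an empty ID list.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

    # Fallback log collection, matching the commands minikube runs next.
    journalctl -u kubelet -n 400
    dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    journalctl -u crio -n 400
    `which crictl || echo crictl` ps -a || docker ps -a
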
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
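
The recurring "connection to the server localhost:8443 was refused" in these describe-nodes attempts is consistent with the empty crictl results above: no kube-apiserver container exists, so the kubectl bundled on the node has nothing to reach on port 8443. An illustrative check of the same condition (not part of the test harness):

    # Nothing should be listening on the apiserver port while these probes keep failing.
    sudo ss -ltn | grep -w 8443 || echo "no listener on :8443"
    curl -ksS https://localhost:8443/healthz || echo "refused, as in the log above"
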
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
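
These lines from process 63744 are the end of minikube's extra readiness wait: after 4m0s the metrics-server pod still reports Ready=False, so the control-plane restart is abandoned and the cluster is torn down with kubeadm reset. An illustrative way to reproduce the same wait-then-reset sequence by hand (kubectl's standard wait command stands in for pod_ready.go; the k8s-app=metrics-server label is the conventional one for the addon and is an assumption, the reset command is copied from the log line above):

    # Equivalent readiness wait; it times out after 4 minutes just as pod_ready.go did.
    kubectl -n kube-system wait pod -l k8s-app=metrics-server \
      --for=condition=Ready --timeout=4m
    # minikube's fallback once the wait fails, exactly as logged:
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
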
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
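The block above is one pass of minikube's log collector: it probes for every expected control-plane container with crictl, finds none, and then falls back to the kubelet journal, dmesg, "kubectl describe nodes", the CRI-O journal and a container listing. A minimal stand-alone version of that probe, using the same commands the runner issues (illustrative only, not minikube's own code; run on the node, where crictl talks to the CRI-O socket as in this job), would be:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done

Every probe coming back empty is also why "describe nodes" keeps failing: with no kube-apiserver container running, nothing answers on localhost:8443, hence the repeated "connection ... refused" stderr in each pass.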
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
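The interleaved pod_ready lines come from two other test profiles running concurrently in the same job, each polling its metrics-server pod for the Ready condition against a 4m0s deadline. Done by hand, the equivalent wait (pod name copied from the log; kubeconfig/context is whichever profile is being checked, so treat both as placeholders) would be:

    kubectl -n kube-system wait --for=condition=Ready \
      pod/metrics-server-6867b74b74-8p24l --timeout=4m

As the later "will not retry" line for this pod shows, it never reports Ready within the deadline in this run.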
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
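At this point the 4m0s wait for the system pods has expired, so minikube stops trying to repair the existing control plane and rebuilds it instead: a forced kubeadm reset against the CRI-O socket (this line), followed further down by a fresh kubeadm init on the same generated config. The v1.20.0 profile (pid 64287) takes the same path a minute later. Reproduced by hand on the node (paths and flags exactly as in the log line; the binaries are staged by minikube under /var/lib/minikube/binaries), the reset step is:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force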
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
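The four grep/rm pairs above are minikube's stale-kubeconfig cleanup before re-running kubeadm init: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf it checks whether the file already points at https://control-plane.minikube.internal:8443 and removes the file when the check fails. Here all four files are simply gone after the reset, so every grep exits with status 2 and every rm is a no-op. Collapsed into one loop (a sketch of the same logic, not minikube's code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done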
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
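The two W-level lines are kubeadm's own deprecation warnings: the generated /var/tmp/minikube/kubeadm.yaml still uses the kubeadm.k8s.io/v1beta3 API. They do not stop this init (it completes successfully further down); the upgrade path kubeadm suggests is its migrate subcommand, for example (output filename chosen here purely for illustration):

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm.migrated.yaml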
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
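The 496-byte payload copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced two lines earlier. The log does not reproduce its contents; the sketch below only shows the general shape of such a bridge-plus-portmap conflist, with the CNI version, bridge name and pod subnet as illustrative assumptions rather than values taken from this run:

    # illustrative only: the values below are assumptions, not the file minikube actually wrote
    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF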
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
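The readiness polling that follows is done by the test harness through the Kubernetes API, but the same condition can be checked by hand against this cluster; a sketch, assuming kubectl is using the embed-certs-503330 context that minikube writes into the kubeconfig:

    # wait up to 6 minutes for the CoreDNS pods in kube-system to become Ready
    kubectl --context embed-certs-503330 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m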
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
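The three addons enabled here (storage-provisioner, metrics-server, default-storageclass) match the entries set to true in the toEnable map logged at the start of this block. Outside the test harness the same end state is reachable with the minikube CLI, for example:

    # storage-provisioner and default-storageclass are on by default; metrics-server is opt-in
    minikube -p embed-certs-503330 addons enable metrics-server
    # confirm which addons are enabled for the profile
    minikube -p embed-certs-503330 addons list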
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
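The healthz probe above hits the apiserver endpoint at https://192.168.50.97:8443/healthz directly. The same check can be reproduced through kubectl, which reuses the client certificates from the kubeconfig, for example:

    # prints "ok" once the apiserver reports healthy
    kubectl --context embed-certs-503330 get --raw='/healthz'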
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
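With the kubeconfig at /home/jenkins/minikube-integration/19780-9412/kubeconfig updated and the profile's context selected, a quick sanity check from the host would be, for example:

    # the active context should now be embed-certs-503330
    kubectl config current-context
    # the single control-plane node should report Ready
    kubectl get nodes -o wide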
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
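The two deprecation warnings refer to the kubeadm.k8s.io/v1beta3 API still used by the generated /var/tmp/minikube/kubeadm.yaml. The migration they suggest can be run against that file directly; a sketch, with the output path chosen here purely for illustration:

    # rewrite the deprecated v1beta3 config using the current kubeadm config API version
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml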
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
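A minimal sketch of the equivalent user-facing step, assuming the profile name default-k8s-diff-port-733270 shown above; the test harness enables these addons through start flags, so the commands below are illustrative rather than something the recorded run executed:

	# hypothetical manual equivalent of the addon enablement recorded above
	minikube addons enable metrics-server -p default-k8s-diff-port-733270
	# list addon state for the profile (expect storage-provisioner, default-storageclass, metrics-server enabled)
	minikube addons list -p default-k8s-diff-port-733270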
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
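A minimal sketch of how one might confirm the state reported above, assuming the context name default-k8s-diff-port-733270 that kubectl is now configured with and the metrics-server addon's standard k8s-app=metrics-server label; these commands are illustrative and were not part of the recorded run:

	# check the single node and the addon pod that the log lists as Pending
	kubectl --context default-k8s-diff-port-733270 get nodes
	kubectl --context default-k8s-diff-port-733270 -n kube-system get pods -l k8s-app=metrics-server
	# pod events usually explain why the metrics-server container never becomes Ready
	kubectl --context default-k8s-diff-port-733270 -n kube-system describe pod -l k8s-app=metrics-server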
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
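Once a start completes like this, the result can be spot-checked from the host; a sketch assuming the kubeconfig minikube just wrote (the context name is taken from the line above):

    kubectl config current-context                               # expected: no-preload-480205
    kubectl version                                              # client/server versions behind the "minor skew" check
    kubectl --context no-preload-480205 get pods -n kube-system  # the pods enumerated in system_pods above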
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
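The failing check in the dump above is kubeadm polling the kubelet's healthz endpoint; the troubleshooting steps it prints can be run directly on the node. A sketch, using only the commands and runtime endpoint quoted in the message:

    curl -sSL http://localhost:10248/healthz     # the call kubeadm retries (connection refused in this run)
    systemctl status kubelet                     # is the kubelet unit active?
    journalctl -xeu kubelet                      # recent kubelet journal entries
    sudo systemctl enable kubelet.service        # addresses the [WARNING Service-Kubelet] above
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause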
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
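The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. A rough shell equivalent (endpoint taken from this run; not the exact code minikube executes):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "${endpoint}" "/etc/kubernetes/${f}"; then
        sudo rm -f "/etc/kubernetes/${f}"   # drop configs that do not point at the expected endpoint
      fi
    done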
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
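The block above is minikube probing for each expected control-plane container by name; because kubeadm never brought the static pods up, every query returns an empty ID list. A sketch of the same probe as a loop (component names copied from the queries above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      echo "${name}: ${ids:-<none>}"
    done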
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
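The box above asks for a full log bundle when filing an issue; the command it refers to is:

    minikube logs --file=logs.txt   # writes the collected logs to logs.txt for attaching to a GitHub issue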
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
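The final suggestion points at a kubelet cgroup-driver mismatch as a common cause of this failure; a sketch of the retry it describes (the profile name is a placeholder, since this run's profile is not shown on these lines):

    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd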
	
	
	==> CRI-O <==
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.388675652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f206279e-c558-4e01-9ed4-3ba5571ba711 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.388992693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f206279e-c558-4e01-9ed4-3ba5571ba711 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.427569762Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b6160f2-625a-453e-8744-abfbda1d810f name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.427658346Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b6160f2-625a-453e-8744-abfbda1d810f name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.428992902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=22cc9854-89c5-4e71-86cf-86614b82bcff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.429379395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505890429358029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=22cc9854-89c5-4e71-86cf-86614b82bcff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.429896473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96a60d9f-0799-485d-b8dc-fca17e196dbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.430038572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96a60d9f-0799-485d-b8dc-fca17e196dbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.430561498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96a60d9f-0799-485d-b8dc-fca17e196dbc name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.465377344Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=2ac73b40-b149-40d7-91d4-22281aaf1263 name=/runtime.v1.RuntimeService/Status
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.465467051Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2ac73b40-b149-40d7-91d4-22281aaf1263 name=/runtime.v1.RuntimeService/Status
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.474523538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bbebe496-c253-4f64-bd51-757895b2c0c9 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.474605574Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bbebe496-c253-4f64-bd51-757895b2c0c9 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.476562683Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f56e28f4-c3b3-4dc4-9ecb-2a3535b4747a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.477095090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505890477070039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f56e28f4-c3b3-4dc4-9ecb-2a3535b4747a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.477565805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=011e8ebd-4cbe-479b-b62a-5cadd3d406e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.477648284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=011e8ebd-4cbe-479b-b62a-5cadd3d406e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.477921158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=011e8ebd-4cbe-479b-b62a-5cadd3d406e8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.518056999Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3ad1e1a-620a-4840-80ad-9a3bf1ed64da name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.518152464Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3ad1e1a-620a-4840-80ad-9a3bf1ed64da name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.519063016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa61ea30-9ee3-4ffe-985a-cd1acfee67af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.519447246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505890519426033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa61ea30-9ee3-4ffe-985a-cd1acfee67af name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.519930342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9b5ee06-eaac-44ca-9d1c-9ec1f8725569 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.520011047Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9b5ee06-eaac-44ca-9d1c-9ec1f8725569 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:31:30.520191388Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9b5ee06-eaac-44ca-9d1c-9ec1f8725569 name=/runtime.v1.RuntimeService/ListContainers
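The crio debug entries above are the CRI gRPC calls (Version, ImageFsInfo, Status, ListContainers) issued against the runtime socket while the logs were being collected. For reference, a rough sketch of how the same queries can be reproduced by hand from the node with crictl, assuming crictl is installed there; the socket path is the one advertised in the node's cri-socket annotation, and these commands are not part of the captured output:

    # Point crictl at the CRI-O socket referenced in the logs.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Runtime and network readiness; mirrors the Status response above.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
    # All containers, including exited ones; mirrors the ListContainers responses.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a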
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	519150750d160       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   40f5ac310a4c0       storage-provisioner
	1599ceb30116f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6665a3fd51c65       coredns-7c65d6cfc9-8x9ns
	be8ea22a44eb0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e7f4ce8dc720c       coredns-7c65d6cfc9-6644x
	1a250c859008a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   18e07ae69944d       kube-proxy-6klwf
	a67302292e06e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f0e11d5fdb6e9       etcd-default-k8s-diff-port-733270
	b41b34d2a3dcd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   5c17b81e750a1       kube-scheduler-default-k8s-diff-port-733270
	5fc33213e2fd8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   d8abb33ba7d58       kube-apiserver-default-k8s-diff-port-733270
	4cb9d0e572902       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   ad7d44cd50ef0       kube-controller-manager-default-k8s-diff-port-733270
	2419be48ef7ea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   deb32eb8f9eb4       kube-apiserver-default-k8s-diff-port-733270
	
	
	==> coredns [1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-733270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-733270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=default-k8s-diff-port-733270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-733270
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:31:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:27:31 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:27:31 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:27:31 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:27:31 +0000   Wed, 09 Oct 2024 20:22:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.134
	  Hostname:    default-k8s-diff-port-733270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c47bd50252314253803dbb053fca24c4
	  System UUID:                c47bd502-5231-4253-803d-bb053fca24c4
	  Boot ID:                    c11b6fae-9e1c-4543-9658-1fcfc30a47b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6644x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 coredns-7c65d6cfc9-8x9ns                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m10s
	  kube-system                 etcd-default-k8s-diff-port-733270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m15s
	  kube-system                 kube-apiserver-default-k8s-diff-port-733270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-733270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-proxy-6klwf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-default-k8s-diff-port-733270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 metrics-server-6867b74b74-srjrs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m9s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s (x8 over 9m21s)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s (x7 over 9m21s)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m15s                  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m15s                  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s                  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m11s                  node-controller  Node default-k8s-diff-port-733270 event: Registered Node default-k8s-diff-port-733270 in Controller
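The node description above is a point-in-time dump from the API server; the same conditions, capacity, and pod allocation can be pulled live with kubectl. A minimal sketch, assuming the kubeconfig context carries the profile name as it does elsewhere in this report:

    # Full description: conditions, allocatable resources, non-terminated pods, events.
    kubectl --context default-k8s-diff-port-733270 describe node default-k8s-diff-port-733270
    # Just the Ready condition, convenient for scripted checks.
    kubectl --context default-k8s-diff-port-733270 get node default-k8s-diff-port-733270 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'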
	
	
	==> dmesg <==
	[  +0.041503] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.990112] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.489351] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 9 20:17] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.080715] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.058439] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061166] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.219655] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.117621] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.310828] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.131703] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +2.201156] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.062913] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.531482] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.583706] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.429409] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 9 20:22] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.622050] systemd-fstab-generator[2540]: Ignoring "noauto" option for root device
	[  +4.969307] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.587071] systemd-fstab-generator[2864]: Ignoring "noauto" option for root device
	[  +4.864577] systemd-fstab-generator[2973]: Ignoring "noauto" option for root device
	[  +0.103872] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.522596] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09] <==
	{"level":"info","ts":"2024-10-09T20:22:10.462596Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T20:22:10.462883Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b97e97327d189999","initial-advertise-peer-urls":["https://192.168.72.134:2380"],"listen-peer-urls":["https://192.168.72.134:2380"],"advertise-client-urls":["https://192.168.72.134:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.134:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T20:22:10.462925Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T20:22:10.462994Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.134:2380"}
	{"level":"info","ts":"2024-10-09T20:22:10.463026Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.134:2380"}
	{"level":"info","ts":"2024-10-09T20:22:10.687871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-09T20:22:10.687981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-09T20:22:10.688036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 received MsgPreVoteResp from b97e97327d189999 at term 1"}
	{"level":"info","ts":"2024-10-09T20:22:10.688070Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:22:10.688094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 received MsgVoteResp from b97e97327d189999 at term 2"}
	{"level":"info","ts":"2024-10-09T20:22:10.688120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b97e97327d189999 became leader at term 2"}
	{"level":"info","ts":"2024-10-09T20:22:10.688146Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b97e97327d189999 elected leader b97e97327d189999 at term 2"}
	{"level":"info","ts":"2024-10-09T20:22:10.693994Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.698431Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:22:10.703504Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:22:10.705171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.134:2379"}
	{"level":"info","ts":"2024-10-09T20:22:10.698314Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b97e97327d189999","local-member-attributes":"{Name:default-k8s-diff-port-733270 ClientURLs:[https://192.168.72.134:2379]}","request-path":"/0/members/b97e97327d189999/attributes","cluster-id":"e05c7f9c7688aa0f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:22:10.705950Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:22:10.706211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:22:10.713847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:22:10.706407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e05c7f9c7688aa0f","local-member-id":"b97e97327d189999","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.713980Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.714028Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.714525Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:22:10.718735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
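The etcd member above elects itself leader at term 2 and then serves client traffic on 192.168.72.134:2379 with client-cert auth, alongside a plain-HTTP metrics listener on 127.0.0.1:2381. A rough health-check sketch from inside the VM: the /health probe uses the metrics listener reported in the log, while the etcdctl variant assumes etcdctl is available on the node and that the server certificate pair is accepted for client authentication (both assumptions, not shown in the capture):

    # Liveness via the insecure metrics listener from the log.
    curl -s http://127.0.0.1:2381/health
    # Member health over the TLS client endpoint (assumes etcdctl and cert reuse).
    ETCDCTL_API=3 etcdctl --endpoints=https://192.168.72.134:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health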
	
	
	==> kernel <==
	 20:31:30 up 14 min,  0 users,  load average: 0.31, 0.20, 0.12
	Linux default-k8s-diff-port-733270 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719] <==
	W1009 20:22:02.203505       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.203612       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.209051       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.230509       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.281046       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.387889       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.395338       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.410904       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.428375       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.475410       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.511455       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.546660       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.559050       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.591432       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.597044       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.613151       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.617975       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.697969       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.704484       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.728362       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.732870       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.821904       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.833495       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.863506       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.885071       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81] <==
	W1009 20:27:13.457958       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:27:13.458063       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:27:13.459012       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:27:13.459236       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:28:13.459335       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:28:13.459540       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:28:13.459990       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:28:13.460097       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:28:13.461017       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:28:13.462175       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:30:13.462289       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:30:13.462712       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:30:13.462942       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:30:13.463078       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:30:13.463912       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:30:13.465131       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188] <==
	E1009 20:26:19.402077       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:19.844469       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:26:49.409249       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:49.852030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:27:19.416129       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:19.861590       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:27:31.121031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-733270"
	E1009 20:27:49.422430       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:49.869475       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:28:19.430195       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:19.878776       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:28:26.247021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="312.905µs"
	I1009 20:28:39.249705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="192.966µs"
	E1009 20:28:49.436919       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:49.887611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:29:19.443902       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:19.894234       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:29:49.450647       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:49.901332       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:19.458048       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:19.910184       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:49.464984       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:49.920071       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:31:19.472276       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:31:19.930104       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:22:21.818439       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:22:21.844092       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.134"]
	E1009 20:22:21.844172       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:22:21.944554       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:22:21.944640       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:22:21.944675       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:22:21.947496       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:22:21.948070       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:22:21.948338       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:21.950059       1 config.go:199] "Starting service config controller"
	I1009 20:22:21.950116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:22:21.950156       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:22:21.950172       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:22:21.951592       1 config.go:328] "Starting node config controller"
	I1009 20:22:21.951645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:22:22.050680       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:22:22.050772       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:22:22.052251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4] <==
	W1009 20:22:12.514385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:22:12.515775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.337764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:22:13.337880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.347168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.347233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.371153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 20:22:13.371208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.391237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:22:13.391291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.415901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:22:13.415992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.417082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.417203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.473085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:22:13.473182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.564656       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.564952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.603480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.603552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.631209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:22:13.631266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.704619       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:22:13.704673       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1009 20:22:16.705095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:30:20 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:20.230971    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:30:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:25.418957    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505825418559681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:25.419316    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505825418559681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:34 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:34.231100    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:30:35 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:35.421679    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505835421229343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:35 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:35.422009    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505835421229343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:45 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:45.423497    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505845423245844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:45 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:45.423539    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505845423245844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:49 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:49.232842    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:30:55 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:55.427223    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505855426405427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:55 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:30:55.427431    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505855426405427,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:00 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:00.231322    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:31:05 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:05.429553    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505865429158918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:05 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:05.429900    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505865429158918,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:14 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:14.231278    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:15.275664    2871 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:15.432211    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505875431649800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:15.432237    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505875431649800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:25.433427    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505885433021458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:25.433695    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505885433021458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:26 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:31:26.230907    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	
	
	==> storage-provisioner [519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8] <==
	I1009 20:22:21.730760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:22:21.748389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:22:21.748551       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:22:21.766392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:22:21.767060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b3da48f-dde7-4ad2-82ca-0315dd56d005", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4 became leader
	I1009 20:22:21.769429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4!
	I1009 20:22:21.870671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-srjrs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs: exit status 1 (58.794247ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-srjrs" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.04s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.06s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1009 20:24:51.613801   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:24:51.909054   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480205 -n no-preload-480205
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:31:44.160291023 +0000 UTC m=+6315.150063954
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-480205 logs -n 25: (2.031841901s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
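Any pre-existing bridge or podman CNI definitions under /etc/cni/net.d are renamed with a `.mk_disabled` suffix (here `87-podman-bridge.conflist`) so they cannot conflict with the bridge config minikube writes later. A stand-alone equivalent of that rename, with the find/glob logic reduced to substring matching and an invented function name:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableConflictingCNI renames bridge/podman CNI configs so the runtime skips them.
    func disableConflictingCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableConflictingCNI("/etc/cni/net.d")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
    }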
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
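The run of `sed` commands above edits `/etc/crio/crio.conf.d/02-crio.conf` in place: it pins `pause_image` to registry.k8s.io/pause:3.10, switches `cgroup_manager` to cgroupfs with `conmon_cgroup = "pod"`, and seeds a `default_sysctls` list containing `net.ipv4.ip_unprivileged_port_start=0`. The same edits expressed as regexp substitutions, as a sketch only (it assumes the TOML keys shown in the log and skips the default_sysctls handling):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        out := string(data)
        // Point CRI-O at the pause image kubeadm expects.
        out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10"`)
        // Drop any existing conmon_cgroup line, then force cgroupfs and re-add conmon_cgroup after it.
        out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(out, "")
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        if err := os.WriteFile(conf, []byte(out), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }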
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
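The sysctl failure two lines up is expected on a fresh VM: `/proc/sys/net/bridge/bridge-nf-call-iptables` only exists once the `br_netfilter` module is loaded, which is why the next steps are `modprobe br_netfilter` and enabling IPv4 forwarding. A small sketch of that precondition check (requires root; the command and file paths match the log, the surrounding program is illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); os.IsNotExist(err) {
            // The sysctl key only appears once br_netfilter is loaded.
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", err, out)
                return
            }
        }
        // Bridged pod traffic must also be forwardable at the IPv4 layer.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("netfilter bridging prerequisites satisfied")
    }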
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
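After `systemctl restart crio`, start.go allows up to 60s for the runtime socket to appear and for `crictl version` to answer before moving on. A polling loop of the same shape, with an assumed 500ms poll interval:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForCRISocket polls until the CRI-O socket exists and `crictl version`
    // succeeds, or the deadline passes.
    func waitForCRISocket(socket string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(socket); err == nil {
                if err := exec.Command("/usr/bin/crictl", "version").Run(); err == nil {
                    return nil
                }
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, socket)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForCRISocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("cri-o is up")
    }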
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
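The bash snippet above rebuilds /etc/hosts in /tmp and copies it into place: any stale `host.minikube.internal` line is dropped and the current gateway mapping (192.168.50.1) is appended. Roughly the same rewrite in Go, as an illustrative helper only (touching /etc/hosts requires root):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing "<ip>\t<host>" line and appends a fresh one.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }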
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
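Each CA copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked as `/etc/ssl/certs/<hash>.0` (for example b5213941.0 for minikubeCA.pem), which is the layout OpenSSL-based clients use to locate trust anchors. A minimal helper doing the same two steps, assuming openssl is on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash installs certPath under /etc/ssl/certs/<subject-hash>.0 so that
    // OpenSSL-based verification can find it.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // ignore error; mirrors `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }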
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
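The `-checkend 86400` invocations above verify that each control-plane certificate remains valid for at least 24 hours before a cluster restart is attempted instead of regenerating certs. The same check written against Go's x509 package (file list shortened, helper name invented):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // which is what `openssl x509 -noout -checkend 86400` tests above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }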
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
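The healthz wait above shows the usual startup progression: connection refused while the apiserver process comes up, then 403 because the anonymous probe is not yet authorized, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200. A polling sketch that treats everything short of 200 as "not ready yet" (the timeout, interval, and skipped TLS verification are simplifications for illustration; the real check trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz until it answers 200 OK or
    // the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("apiserver never became healthy within %v", timeout)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.97:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }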
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
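
The clock-skew check logged above is nothing more than running `date +%s.%N` on the guest and comparing it with the host's wall clock at the moment the command returns. A rough manual equivalent, using the SSH key and IP from this run (hand-rolled for illustration; minikube does this through its internal SSH runner, not the ssh binary):

    # on the host: capture local time, ask the guest for its time, compare the two
    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa \
        docker@192.168.61.119 'date +%s.%N')
    echo "delta: $(echo "$guest_ts - $host_ts" | bc)s"
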
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
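
The CRI-O preparation logged above boils down to a handful of idempotent shell edits on the guest. A minimal sketch of the same sequence, reconstructed from the commands in this log (not the authoritative implementation), assuming the stock /etc/crio/crio.conf.d/02-crio.conf layout shipped in the minikube ISO:

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup driver expected by kubeadm for v1.20.0
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # clear stale minikube CNI state, make sure bridged traffic hits iptables, then restart the runtime
    sudo rm -rf /etc/cni/net.mk
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
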
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
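
Because `crictl images` found no preloaded v1.20.0 images, the cached preload tarball is streamed from the host to /preloaded.tar.lz4 on the guest and unpacked into /var (the extraction shows up a bit further down in the log). The guest-side steps, taken verbatim from this run, amount to:

    # on the guest, after the tarball has been streamed to /preloaded.tar.lz4:
    stat -c "%s %y" /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    sudo crictl images --output json   # should now list the v1.20.0 control-plane images
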
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
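
The warning above is the cache-image fallback path: with no preload usable in the runtime, minikube tries the local image daemon for each image, then falls back to per-image files under the host cache directory, and only loads what it actually finds there. A quick way to see what is (or is not) cached on the host, using the path from the warning (a diagnostic sketch, assuming the cache directory exists on this Jenkins host):

    ls -lh /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/
    # the missing entry is what triggered the warning:
    stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
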
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
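
The YAML above is the kubeadm configuration minikube renders for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration stanzas) before shipping it to the guest as /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a stanza can be produced from per-profile values, here is a minimal Go sketch using text/template; the template shape and struct names in the sketch are illustrative, not minikube's actual generator.

// Hypothetical sketch (not minikube's real template): render the
// InitConfiguration stanza shown above from per-profile values.
package main

import (
	"os"
	"text/template"
)

// Values taken from the old-k8s-version-169021 log lines above.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	NodeIP           string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.61.119",
		BindPort:         8443,
		NodeName:         "old-k8s-version-169021",
		CRISocket:        "/var/run/crio/crio.sock",
		NodeIP:           "192.168.61.119",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
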
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
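
The openssl x509 -hash calls above compute OpenSSL's subject-name hash for each installed PEM, and the ln -fs commands create the <hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that the system trust store uses for lookup. A minimal Go sketch of that step, assuming the paths shown in the log and shelling out to the same openssl command:

// Rough sketch of the hash-symlink step above: compute OpenSSL's subject
// hash for an installed CA PEM and link /etc/ssl/certs/<hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(pemPath string) error {
	// Same probe the log runs: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of ln -fs: drop any stale link, then symlink to the PEM.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
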
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
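
The -checkend 86400 probes ask whether each control-plane certificate will still be valid 24 hours from now; certificates that fail the check would be regenerated. A rough equivalent of one such probe using Go's crypto/x509 (the path is one of the certs checked above; expiresWithin is just an illustrative helper name):

// Sketch of an "expires within 24h?" check, equivalent to
// `openssl x509 -checkend 86400`, using crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the cert's NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
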
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
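
The repeated pgrep -xnf kube-apiserver.*minikube.* runs above are a poll loop: after the kubeadm init phase steps, minikube waits for an apiserver process to appear, retrying roughly every half second. A minimal sketch of such a wait loop; the interval and timeout values here are illustrative, not minikube's exact settings:

// Minimal sketch of the apiserver wait loop suggested by the pgrep calls
// above; interval and timeout are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log runs over SSH; pgrep exits non-zero when no
		// matching process exists yet.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver process is up")
}
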
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
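
provision.go:117 issues a server certificate signed by the shared CA, with the SAN list shown (127.0.0.1, 192.168.39.162, localhost, minikube, no-preload-480205) and org jenkins.no-preload-480205. A trimmed Go sketch of signing such a certificate with crypto/x509, assuming the CA cert and key paths from the log are PEM-encoded (the key in PKCS#1 form); key size, serial and validity below are illustrative:

// Trimmed sketch of the server-cert step: sign a cert with the shared CA
// using the SANs shown in the log. Not minikube's actual code path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func loadPEM(path string) (*pem.Block, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return nil, fmt.Errorf("%s: no PEM data", path)
	}
	return block, nil
}

func main() {
	base := "/home/jenkins/minikube-integration/19780-9412/.minikube/certs"
	caCertBlock, err := loadPEM(base + "/ca.pem")
	if err != nil {
		panic(err)
	}
	caKeyBlock, err := loadPEM(base + "/ca-key.pem")
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caCertBlock.Bytes)
	if err != nil {
		panic(err)
	}
	// Assumes a PKCS#1 ("RSA PRIVATE KEY") CA key.
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes)
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-480205"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.162")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-480205"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
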
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
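
fix.go reads the guest's date +%s.%N output, converts it to a timestamp, and compares it against the host-side reference time; here the delta of 64.583791ms is within tolerance, so the guest clock is left alone. A small sketch of that comparison using the two timestamps from the log; the 2s tolerance below is an assumption, not necessarily minikube's threshold:

// Sketch of the guest-clock check logged by fix.go: parse the guest's
// `date +%s.%N` output and compare it with a host-side timestamp.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Guest and host values taken from the log lines above.
	host := time.Unix(1728505075, 451376872)
	delta, err := clockDelta("1728505075.515960663", host)
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	tolerance := 2 * time.Second
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
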
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
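
The sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because the br_netfilter module is not loaded yet, so minikube falls back to modprobe br_netfilter and then enables IPv4 forwarding. A hedged Go sketch of that probe-then-fallback sequence, mirroring the commands in the log:

// Sketch of the netfilter setup above: probe the bridge sysctl, load
// br_netfilter if the probe fails, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// A missing /proc/sys/net/bridge/... entry means br_netfilter is not loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl not available, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}
	// echo 1 > /proc/sys/net/ipv4/ip_forward (needs root when run directly).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward:", err)
		os.Exit(1)
	}
}
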
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
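
The lines above switch the node's container runtime over to CRI-O: containerd and the Docker/cri-dockerd units are stopped, disabled and masked, crictl is pointed at the CRI-O socket, and the CRI-O drop-in gets the pause image, cgroup driver and conmon cgroup before the daemon is restarted. A condensed, hand-runnable sketch of the same preparation (paths and values copied from the log above; the exact drop-in layout can differ between CRI-O versions):

    # disable the competing runtimes so CRI-O owns the CRI socket
    sudo systemctl stop -f containerd cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # pause image and cgroup handling in the CRI-O drop-in
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # kernel prerequisites, then restart the runtime
    sudo modprobe br_netfilter
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio
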
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
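
The grep/rewrite pair above keeps exactly one host.minikube.internal entry in /etc/hosts, pointing at the host-side gateway (192.168.39.1 here). A simpler, equivalent sketch of that idempotent update (the sed/printf form is an assumption; minikube itself uses the grep-and-rewrite shown above):

    # drop any stale entry, then append the current one
    sudo sed -i '/\thost\.minikube\.internal$/d' /etc/hosts
    printf '192.168.39.1\thost.minikube.internal\n' | sudo tee -a /etc/hosts
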
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
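
With no preload tarball for this profile, each required image is handled individually: podman image inspect checks whether the expected image ID already exists in the node's containers/storage, crictl rmi clears any stale tag, and podman load imports the cached tarball from /var/lib/minikube/images, which CRI-O then sees through the shared storage. A per-image sketch of that check-then-load flow (image name and tarball path taken from the log):

    IMG=registry.k8s.io/etcd:3.5.15-0
    TARBALL=/var/lib/minikube/images/etcd_3.5.15-0

    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
      sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale tag first
      sudo podman load -i "$TARBALL"                        # import from the minikube image cache
    fi
    sudo crictl images | grep etcd                          # CRI-O sees the image via shared storage
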
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
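
The repeated pgrep lines (here and from the other profiles running in parallel) are minikube waiting for a kubeadm-started kube-apiserver process to appear on the node, retrying roughly every 500 ms. The loop amounts to:

    # poll until the kube-apiserver static pod is running
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done
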
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
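
The kubelet unit shown above is a systemd drop-in (copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down); the empty ExecStart= line clears the packaged command line before the minikube-specific one takes effect. Once the drop-in is in place, the usual reload applies:

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl cat kubelet     # prints the merged unit, including the overriding ExecStart
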
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
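
The kubeadm.yaml generated above is written to /var/tmp/minikube/kubeadm.yaml.new and, on this restart path, driven through individual kubeadm init phases rather than a full kubeadm init; the same commands appear verbatim further down in the log. Consolidated:

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    K8S=/var/lib/minikube/binaries/v1.31.1
    sudo env PATH="$K8S:$PATH" kubeadm init phase certs all          --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S:$PATH" kubeadm init phase kubeconfig all     --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S:$PATH" kubeadm init phase kubelet-start      --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S:$PATH" kubeadm init phase control-plane all  --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$K8S:$PATH" kubeadm init phase etcd local         --config /var/tmp/minikube/kubeadm.yaml
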
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
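
Two different openssl checks run in this block: each CA under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject-hash name so OpenSSL can find it by hash lookup, and every control-plane certificate is verified to stay valid for at least another 86400 s (24 h). For a single file the pair looks like this (file names taken from the log):

    # subject-hash symlink, e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"

    # non-zero exit if the cert expires within the next 24 hours
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
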
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
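
The config check above walks the four kubeconfigs under /etc/kubernetes and removes any that does not reference the expected control-plane endpoint (here they are simply absent, so each grep fails and the rm is a no-op). As a loop, the cleanup is:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
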
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
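For reference, a minimal Go sketch of the health polling shown above: keep GETting /healthz until it returns 200 "ok", treating the 403 (anonymous user, RBAC not yet bootstrapped) and 500 (poststarthooks still failing) responses as "not ready yet". The URL, timeout, and retry interval below are illustrative; this is not minikube's api_server.go implementation.

    // waitForHealthz polls an apiserver /healthz endpoint until it reports healthy.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver's serving cert is not trusted by this host, so the probe
        // skips TLS verification, as an anonymous /healthz check would.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 200 with body "ok" means healthy; 403 and 500 mean "try again".
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %.40s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.162:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }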
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
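The two lines above are the whole "Configuring bridge CNI" step: create /etc/cni/net.d and drop a conflist into it. Below is a minimal sketch of the equivalent local operation; the JSON is a generic bridge + host-local + portmap config given purely as an illustrative assumption, not the exact 496-byte file minikube generates.

    // Sketch of the bridge CNI configuration step: mkdir -p /etc/cni/net.d and
    // write a conflist. Values in the JSON are illustrative.
    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil { // sudo mkdir -p /etc/cni/net.d
            log.Fatal(err)
        }
        // minikube scp's the file into the VM; writing it locally is the equivalent step.
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }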
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
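Each pod_ready.go line above is a per-pod check of the PodReady condition, skipped (and logged as an error) while the hosting node itself is not Ready. A minimal client-go sketch of that per-pod check follows; the kubeconfig path and pod name are copied from the log purely as examples, and this is not minikube's pod_ready.go code.

    // podIsReady reports whether a pod's PodReady condition is True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19780-9412/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-dddm2", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
    }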
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
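The addon enablement above boils down to copying each manifest into /etc/kubernetes/addons on the node and applying it with the bundled kubectl against the node-local kubeconfig. A sketch of that invocation via os/exec follows; in the real flow the command runs over SSH inside the VM, and the paths are taken from the log as examples.

    // applyManifests mirrors the logged command:
    //   sudo KUBECONFIG=/var/lib/minikube/kubeconfig .../kubectl apply -f <manifest> ...
    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyManifests(paths ...string) error {
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply"}
        for _, p := range paths {
            args = append(args, "-f", p)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        if err := applyManifests(
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        ); err != nil {
            fmt.Println(err)
        }
    }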
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
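The 64287 lines above (a run still waiting on a v1.20.0 control plane) repeatedly probe for control-plane containers and find none, so only kubelet, dmesg, CRI-O, and container-status logs can be gathered. A minimal sketch of that lookup, using the same crictl invocation as cri.go and treating empty output as "no container found"; it is illustrative, not minikube's code.

    // listContainers asks crictl for container IDs matching a name, as in:
    //   sudo crictl ps -a --quiet --name=<name>
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // --quiet prints one container ID per line; no output means no match.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
            ids, err := listContainers(name)
            if err != nil {
                fmt.Printf("listing %s: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }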
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
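
Each pass begins by asking the container runtime for every expected control-plane container by name via sudo crictl ps -a --quiet --name=<component>; an empty ID list produces the "No container was found matching ..." warnings above. A hedged Go sketch of that probe loop, reusing the exact crictl invocation from the log (the component list mirrors the names probed above; running it needs crictl and root access to the CRI socket):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// Same command the log runs over SSH: list all containers
			// (running or exited) whose name matches, IDs only.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %v\n", name, ids)
		}
	}
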
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
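
The interleaved pod_ready.go lines come from three other profiles (PIDs 64109, 63427, 63744), each polling until its metrics-server pod reports the Ready condition; every such line is one failed poll, a couple of seconds apart. A hedged sketch of that kind of poll, shelling out to kubectl (the kube-system namespace matches the log; the k8s-app=metrics-server label selector and the timings are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// metricsServerReady reports whether any matching pod has condition Ready=True.
	func metricsServerReady() bool {
		out, err := exec.Command("kubectl", "get", "pods",
			"-n", "kube-system",
			"-l", "k8s-app=metrics-server",
			"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false
		}
		return strings.Contains(string(out), "True")
	}

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			if metricsServerReady() {
				fmt.Println("metrics-server is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for metrics-server")
	}
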
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
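
When no control-plane containers turn up, the fallback is to collect host-level evidence: the last 400 kubelet and CRI-O journal entries, warning-and-above dmesg lines, and a crictl (or docker) container listing, exactly the commands shown above. A hedged local equivalent of that collection step (the log runs these over SSH inside the VM; this assumes a systemd host with crictl or docker installed, and most of the commands need root):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := [][]string{
			{"journalctl", "-u", "kubelet", "-n", "400"},
			{"journalctl", "-u", "crio", "-n", "400"},
			{"sh", "-c", "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
			{"sh", "-c", "crictl ps -a || docker ps -a"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("== %v (err=%v) ==\n%s\n", c, err, out)
		}
	}
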
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
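
Taken together, the trace is a bounded retry loop: the same probe-and-gather pass repeats roughly every three seconds until the apiserver shows up or the start timeout expires, which is how these StartStop runs eventually fail. A generic sketch of that discipline (the interval and timeout values are illustrative assumptions, not minikube's):

	package main

	import (
		"fmt"
		"time"
	)

	// pollUntil runs check every interval until it returns true or timeout elapses.
	func pollUntil(check func() bool, interval, timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for {
			if check() {
				return true
			}
			if time.Now().After(deadline) {
				return false
			}
			time.Sleep(interval)
		}
	}

	func main() {
		ok := pollUntil(func() bool { return false }, 3*time.Second, 15*time.Second)
		fmt.Println("succeeded:", ok)
	}
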
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
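
The join commands above embed a --discovery-token-ca-cert-hash value; kubeadm derives it as the SHA-256 digest of the cluster CA public key. A minimal sketch of recomputing it on the node, assuming the certificateDir reported earlier in this run (/var/lib/minikube/certs) and the standard openssl pipeline described in the kubeadm documentation:

    # Recompute the discovery token CA cert hash (sketch; CA path assumed from
    # the "[certs] Using certificateDir folder" line in this log).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
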
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
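
The 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. A sketch for inspecting it on the node, assuming the embed-certs-503330 profile from this run; the trailing comment describes the usual shape of a bridge conflist, not the exact contents of this file:

    # Inspect the bridge CNI config written above (sketch).
    minikube -p embed-certs-503330 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
    # A bridge conflist typically declares a "bridge" plugin with "host-local"
    # IPAM for the pod CIDR, plus a "portmap" plugin for hostPort support.
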
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
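
After the addon step above completes, the enabled addons can be double-checked from the host; a sketch assuming the same profile name and the default kubectl context minikube writes for it:

    # Confirm addon and pod state for this profile (sketch).
    minikube addons list -p embed-certs-503330
    kubectl --context embed-certs-503330 -n kube-system get pods
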
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
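
The healthz probe above hits the API server directly at https://192.168.50.97:8443/healthz; the same endpoint can be queried through the kubeconfig that was just written. A sketch, assuming the context name matches the profile as minikube sets by default:

    # Re-run the API server health check via kubectl (sketch).
    kubectl --context embed-certs-503330 get --raw /healthz
    # expected response body: ok
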
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
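
With the context configured, a quick client-side check of the node and system pods follows naturally; a sketch using the cluster name reported above:

    # Post-start verification (sketch).
    kubectl --context embed-certs-503330 get nodes
    kubectl --context embed-certs-503330 -n kube-system get pods -o wide
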
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
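[Note] The "Done!" line above reports that the no-preload-480205 cluster came up and kubectl was pointed at it. As a minimal sketch (not part of the captured log), one way to confirm this from the host is to query the cluster using the context name reported above; the exact commands below are illustrative, assuming the default kubeconfig location:

	kubectl --context no-preload-480205 get nodes
	kubectl --context no-preload-480205 get pods -n kube-system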
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
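[Note] The run above fails with K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion in the log is to retry with the systemd cgroup driver. A minimal sketch of that retry (not part of the captured log; the profile name is a placeholder, not taken from the output):

	minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

The related issue linked in the log output is https://github.com/kubernetes/minikube/issues/4172.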
	
	
	==> CRI-O <==
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.624667417Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505905624644988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f3fa052-5dd2-4804-b618-5c6ea08cef6a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.625050069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d5c7d535-fe7e-45a6-bad0-596ac7a779d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.625099843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d5c7d535-fe7e-45a6-bad0-596ac7a779d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.625337558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d5c7d535-fe7e-45a6-bad0-596ac7a779d8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.666092024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=55acf29b-210e-4665-a865-83e34460d160 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.666205449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=55acf29b-210e-4665-a865-83e34460d160 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.667135031Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=037ecc76-a88a-45e3-8246-51cea89567bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.667681561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505905667654276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=037ecc76-a88a-45e3-8246-51cea89567bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.668550481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc68f10e-fef1-40d4-b208-f90debd5da81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.668606121Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc68f10e-fef1-40d4-b208-f90debd5da81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.668796313Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc68f10e-fef1-40d4-b208-f90debd5da81 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.707071972Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d1d8fc8-25bc-4e14-8498-fe4184a239fd name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.707207703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d1d8fc8-25bc-4e14-8498-fe4184a239fd name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.709682528Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8d109652-c489-4ef5-8657-34d49bd97993 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.710064372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505905710041678,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8d109652-c489-4ef5-8657-34d49bd97993 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.710790423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=88787161-2171-42f5-aefd-a54886f57c2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.710870964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=88787161-2171-42f5-aefd-a54886f57c2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.711140461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=88787161-2171-42f5-aefd-a54886f57c2c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.745569495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7b35543-0ddc-495d-b4ad-46bff7adf641 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.745646254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7b35543-0ddc-495d-b4ad-46bff7adf641 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.746933290Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58a2caac-a8a1-4a8d-bf65-4e001adad71f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.747336772Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505905747312886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58a2caac-a8a1-4a8d-bf65-4e001adad71f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.748074367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d855a07-54f7-447b-a194-1c95faf029cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.748127961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d855a07-54f7-447b-a194-1c95faf029cd name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:31:45 no-preload-480205 crio[701]: time="2024-10-09 20:31:45.748368286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d855a07-54f7-447b-a194-1c95faf029cd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a672e8a67e92b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   2e806445f254d       storage-provisioner
	54de70fedf7d5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   02ea45abe1809       busybox
	3f0da5a79567c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   17ecff4f59d2d       coredns-7c65d6cfc9-dddm2
	355de783599f2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   0c1182fc5dd45       kube-proxy-vbpbk
	8a3298f9f8701       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   2e806445f254d       storage-provisioner
	c6154b0051dbc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   2a1fe5cfb209d       kube-scheduler-no-preload-480205
	9c72eddc31372       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   88c3e467b83bf       etcd-no-preload-480205
	42cddfd08cd98       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   2eac684995236       kube-apiserver-no-preload-480205
	71cf38b8d4096       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   c8613ecf9b51b       kube-controller-manager-no-preload-480205
	
	
	==> coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56065 - 64301 "HINFO IN 4263640063345838452.4491728043591611086. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014689769s
	
	
	==> describe nodes <==
	Name:               no-preload-480205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=no-preload-480205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_08_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:08:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480205
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:31:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:29:01 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:29:01 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:29:01 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:29:01 +0000   Wed, 09 Oct 2024 20:18:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    no-preload-480205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a2a815b36f34b10b1151cb9dfac50a7
	  System UUID:                0a2a815b-36f3-4b10-b115-1cb9dfac50a7
	  Boot ID:                    396a835f-b5b1-42f2-a666-2021b9d852ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-dddm2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-480205                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-480205             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-480205    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-vbpbk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-480205             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-fhcfl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeReady                22m                kubelet          Node no-preload-480205 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-480205 event: Registered Node no-preload-480205 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-480205 event: Registered Node no-preload-480205 in Controller
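The pod table above lists metrics-server-6867b74b74-fhcfl alongside the control-plane pods, and the kube-apiserver log further down reports the v1beta1.metrics.k8s.io aggregated API answering with 503. A quick way to check both ends of that failure on this profile (a sketch; the k8s-app=metrics-server label assumes the stock minikube metrics-server addon):

    # Is the metrics-server pod actually Ready?
    kubectl --context no-preload-480205 -n kube-system get pods -l k8s-app=metrics-server

    # Does the aggregated API report Available=True?
    kubectl --context no-preload-480205 get apiservice v1beta1.metrics.k8s.io -o wide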
	
	
	==> dmesg <==
	[Oct 9 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053569] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203410] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574137] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593628] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.204617] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.064043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080670] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.188662] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.114767] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.273346] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[Oct 9 20:18] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.063207] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.818456] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +5.297709] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.315153] systemd-fstab-generator[1974]: Ignoring "noauto" option for root device
	[  +3.724427] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.142460] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] <==
	{"level":"info","ts":"2024-10-09T20:18:15.464389Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:18:15.471588Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-09T20:18:15.473503Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"95e2e907d4f1ad16","initial-advertise-peer-urls":["https://192.168.39.162:2380"],"listen-peer-urls":["https://192.168.39.162:2380"],"advertise-client-urls":["https://192.168.39.162:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.162:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-09T20:18:15.473685Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-09T20:18:15.473289Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-10-09T20:18:15.476904Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.162:2380"}
	{"level":"info","ts":"2024-10-09T20:18:17.004259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-09T20:18:17.004321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-09T20:18:17.004342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgPreVoteResp from 95e2e907d4f1ad16 at term 2"}
	{"level":"info","ts":"2024-10-09T20:18:17.004360Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-09T20:18:17.004366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 received MsgVoteResp from 95e2e907d4f1ad16 at term 3"}
	{"level":"info","ts":"2024-10-09T20:18:17.004377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"95e2e907d4f1ad16 became leader at term 3"}
	{"level":"info","ts":"2024-10-09T20:18:17.004407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 95e2e907d4f1ad16 elected leader 95e2e907d4f1ad16 at term 3"}
	{"level":"info","ts":"2024-10-09T20:18:17.019662Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"95e2e907d4f1ad16","local-member-attributes":"{Name:no-preload-480205 ClientURLs:[https://192.168.39.162:2379]}","request-path":"/0/members/95e2e907d4f1ad16/attributes","cluster-id":"da8895e0fc3a6493","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:18:17.019688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:18:17.019964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:18:17.019992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:18:17.019669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:18:17.021010Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:18:17.021072Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:18:17.021995Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:18:17.022043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-10-09T20:28:17.053843Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-10-09T20:28:17.065072Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":860,"took":"10.791545ms","hash":4032950074,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-09T20:28:17.065116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4032950074,"revision":860,"compact-revision":-1}
	
	
	==> kernel <==
	 20:31:46 up 14 min,  0 users,  load average: 0.37, 0.17, 0.16
	Linux no-preload-480205 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] <==
	W1009 20:28:19.292544       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:28:19.292674       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:28:19.293862       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:28:19.293900       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:29:19.294254       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:29:19.294397       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:29:19.294255       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:29:19.294449       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:29:19.295648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:29:19.295699       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:31:19.296861       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:31:19.296988       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:31:19.297033       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:31:19.297044       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:31:19.298130       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:31:19.298233       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] <==
	E1009 20:26:21.947392       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:22.443390       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:26:51.954008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:26:52.450490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:27:21.960967       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:22.458291       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:27:51.968010       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:27:52.465688       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:28:21.974624       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:22.472964       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:28:51.980564       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:28:52.480135       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:29:01.518489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-480205"
	E1009 20:29:21.987661       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:22.487783       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:29:36.146013       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="319.472µs"
	I1009 20:29:50.135127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="168.478µs"
	E1009 20:29:51.994438       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:29:52.495140       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:22.001468       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:22.501925       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:30:52.008316       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:30:52.510867       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:31:22.015410       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:31:22.519280       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:18:19.729939       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:18:19.740129       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1009 20:18:19.741593       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:18:19.813360       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:18:19.813400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:18:19.813428       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:18:19.820550       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:18:19.821534       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:18:19.821798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:19.823648       1 config.go:199] "Starting service config controller"
	I1009 20:18:19.823750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:18:19.823854       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:18:19.823876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:18:19.824846       1 config.go:328] "Starting node config controller"
	I1009 20:18:19.824885       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:18:19.924383       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:18:19.924493       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:18:19.925097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] <==
	I1009 20:18:15.873122       1 serving.go:386] Generated self-signed cert in-memory
	W1009 20:18:18.259653       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 20:18:18.260020       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 20:18:18.260077       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 20:18:18.260103       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 20:18:18.288922       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1009 20:18:18.288999       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:18.291328       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:18.294218       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:18:18.294921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 20:18:18.295650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:18:18.394898       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:30:34 no-preload-480205 kubelet[1355]: E1009 20:30:34.244470    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505834243939775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:43 no-preload-480205 kubelet[1355]: E1009 20:30:43.120363    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:30:44 no-preload-480205 kubelet[1355]: E1009 20:30:44.246063    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505844245701155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:44 no-preload-480205 kubelet[1355]: E1009 20:30:44.246113    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505844245701155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:54 no-preload-480205 kubelet[1355]: E1009 20:30:54.247786    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505854247473275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:54 no-preload-480205 kubelet[1355]: E1009 20:30:54.247833    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505854247473275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:30:57 no-preload-480205 kubelet[1355]: E1009 20:30:57.119978    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:31:04 no-preload-480205 kubelet[1355]: E1009 20:31:04.249248    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505864248764883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:04 no-preload-480205 kubelet[1355]: E1009 20:31:04.249774    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505864248764883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:08 no-preload-480205 kubelet[1355]: E1009 20:31:08.123417    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]: E1009 20:31:14.141444    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]: E1009 20:31:14.250851    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505874250668289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:14 no-preload-480205 kubelet[1355]: E1009 20:31:14.250871    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505874250668289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:23 no-preload-480205 kubelet[1355]: E1009 20:31:23.120240    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:31:24 no-preload-480205 kubelet[1355]: E1009 20:31:24.252087    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505884251844811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:24 no-preload-480205 kubelet[1355]: E1009 20:31:24.252109    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505884251844811,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:34 no-preload-480205 kubelet[1355]: E1009 20:31:34.253804    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505894253144890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:34 no-preload-480205 kubelet[1355]: E1009 20:31:34.254210    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505894253144890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:38 no-preload-480205 kubelet[1355]: E1009 20:31:38.121092    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:31:44 no-preload-480205 kubelet[1355]: E1009 20:31:44.256921    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505904256569896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:31:44 no-preload-480205 kubelet[1355]: E1009 20:31:44.256947    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728505904256569896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] <==
	I1009 20:18:19.601675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:18:49.606924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] <==
	I1009 20:18:50.408919       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:18:50.417291       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:18:50.417412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:19:07.819420       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:19:07.819813       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171!
	I1009 20:19:07.820046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad750259-34f8-489e-aa79-f6194ad4f0c3", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171 became leader
	I1009 20:19:07.920435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-480205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fhcfl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl: exit status 1 (62.736979ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fhcfl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.06s)
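For context on this failure: the captured logs above show the only non-running pod, metrics-server-6867b74b74-fhcfl, stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 (which appears to be a deliberately unpullable placeholder image in this test configuration), and the aggregated v1beta1.metrics.k8s.io API answering 503 as a result. A rough way to confirm both from the same kubeconfig, assuming kubectl can still reach the no-preload-480205 context, would be:

    kubectl --context no-preload-480205 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-480205 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The first command should show the APIService as not Available while metrics-server has no running pod; the second should print the fake.domain image seen in the kubelet log.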

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
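The warnings that follow are the wait loop's polling errors. A manual equivalent of the query it keeps retrying, with kubectl pointed at the same profile (the profile's context name is not shown here, so <old-k8s-version-context> below is a placeholder), would be:

    kubectl --context <old-k8s-version-context> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

Each poll fails because the API server at https://192.168.61.119:8443 is refusing connections at this point.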
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: [identical warning repeated 178 more times: every poll of the "kubernetes-dashboard" namespace for pods matching "k8s-app=kubernetes-dashboard" returned "dial tcp 192.168.61.119:8443: connect: connection refused" while the API server remained unreachable]
E1009 20:29:51.613457   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:29:51.909291   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
[the previous warning repeats a further 42 times; the apiserver at 192.168.61.119:8443 keeps refusing connections]
E1009 20:32:54.986351   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
[the previous warning repeats a further 110 times until the client rate limiter hits its context deadline]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
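
The run of warnings above is produced by a helper that keeps listing pods by label against the apiserver until a deadline expires. Purely as an illustration (not the harness's actual code), a minimal client-go sketch of that kind of poll is shown below; the KUBECONFIG lookup and the 5-second retry interval are assumptions.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: KUBECONFIG points at the kubeconfig for the profile under test.
	config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same overall deadline as the failed wait in the log (9 minutes).
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		// The request the helper keeps retrying: list pods in the
		// kubernetes-dashboard namespace matching k8s-app=kubernetes-dashboard.
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		switch {
		case err != nil:
			// While the apiserver is down this is "connect: connection refused".
			fmt.Println("WARNING: pod list returned:", err)
		case len(pods.Items) > 0:
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}

		select {
		case <-ctx.Done():
			fmt.Println("giving up:", ctx.Err()) // "context deadline exceeded"
			return
		case <-time.After(5 * time.Second): // assumed retry interval
		}
	}
}

Each iteration corresponds to one WARNING line in the log: while the node's apiserver is stopped the List call fails with connection refused, and once the context deadline passes the wait is abandoned, which is what the failure below reports.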
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (227.674017ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-169021" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
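
After the wait fails, the test probes the apiserver state with "minikube status --format={{.APIServer}}" and tolerates the non-zero exit (exit status 2) as long as it can read the reported state. A rough sketch of that check, assuming the binary path and profile name shown in the log and that it is run from the CI workspace root:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The probe the test runs after the wait fails; binary path and profile
	// name are copied from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-169021", "-n", "old-k8s-version-169021")

	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))

	if err != nil {
		// A non-zero exit (the log shows exit status 2) still carries a usable
		// state string on stdout, so it is reported but not treated as fatal here.
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	if state != "Running" {
		fmt.Printf("apiserver is not running, skipping kubectl commands (state=%q)\n", state)
		return
	}
	fmt.Println("apiserver is Running; kubectl commands would proceed")
}

With the apiserver reported as Stopped, the harness skips further kubectl queries and moves straight to the post-mortem below.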
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (216.889984ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25: (1.505310201s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
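	For readability, the last start entry in the audit table above (the old-k8s-version-169021 restart whose log follows) can be written as a single command line; a sketch assembled from the table's Args column, using the same binary path the harness invokes:

	    # Sketch: final audit-table entry reassembled as one invocation
	    # (flags copied verbatim from the Args column above).
	    out/minikube-linux-amd64 start -p old-k8s-version-169021 \
	      --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system \
	      --disable-driver-mounts --keep-context=false \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0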
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
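	The repeated dial errors above show that process 63427 (the no-preload-480205 restart) never reached SSH on 192.168.39.162 during this window; provisioning is eventually abandoned at 20:16:30 with "host is not running" further down. A quick manual probe of the same condition, assuming the IP address from these log lines and that netcat is available on the CI host:

	    # Sketch: check whether the guest's SSH port is reachable at all.
	    nc -vz -w 5 192.168.39.162 22 || echo "SSH port unreachable (matches the dial errors above)"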
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
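	The two SSH commands above are the hostname provisioning step: the first sets the guest hostname and writes /etc/hostname, the second keeps /etc/hosts consistent with it. A standalone sketch of the same step, combining both commands exactly as they appear in this log (runnable inside the guest):

	    # Sketch: hostname provisioning as run over SSH above, combined into one script.
	    sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	    if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
	      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts
	      else
	        echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts
	      fi
	    fi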
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
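The healthz sequence above is the usual apiserver readiness dance: connection refused while the process starts, 403 for the anonymous probe until RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) finish, then a plain 200 with body "ok". Below is a minimal Go sketch of such a poller, not minikube's actual api_server.go; the URL, poll interval, and timeout are assumptions taken from the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz polls an apiserver /healthz endpoint until it reports "ok"
    // or the deadline expires. TLS verification is skipped because the probe
    // is anonymous, which is also why the server may first answer 403.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil // healthy
                }
                // 403 or 500 here just means "not ready yet"; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        // Endpoint and timeout taken from the log above; adjust as needed.
        if err := pollHealthz("https://192.168.50.97:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

Skipping TLS verification mirrors the anonymous probe seen in the log; a production client would pin the cluster CA instead.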
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
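The "unable to find current IP address ... will retry after N" lines are libmachine polling libvirt's DHCP leases with a gradually growing delay until the VM reports an address. A rough sketch of that retry pattern follows, assuming a stand-in lookup function rather than the real libvirt query used by retry.go.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a little
    // longer after each failed attempt, roughly like the retry lines above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 250 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // back off gradually
        }
        return "", errors.New("machine never reported an IP address")
    }

    func main() {
        calls := 0
        // Hypothetical lookup that "finds" an IP on the fourth try.
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.72.134", nil
        }, 10)
        fmt.Println(ip, err)
    }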
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
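The pod_ready.go entries wait up to 4m0s for each system-critical pod to carry the Ready condition, and deliberately skip that wait (the "skipping!" lines) while the hosting node itself is still NotReady. Below is a condensed client-go sketch of the pod-side check only; the kubeconfig path and pod name are placeholders, and this is not the code the test driver actually runs.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            // Pod name taken from the log above purely as an example.
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-503330", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }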
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
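The guest-clock check above is plain subtraction: the guest's `date +%s.%N` reading minus the host-side reference taken for the same moment, 1728505032.674262019 − 1728505032.604532015 ≈ 0.069730 s, which is why the delta is reported as within tolerance. A toy recomputation with the two values from the log (float64 precision makes the result approximate at the sub-microsecond level):

    package main

    import "fmt"

    func main() {
        guest := 1728505032.674262019  // guest "date +%s.%N" reading
        remote := 1728505032.604532015 // host-side reference reading
        fmt.Printf("delta: %.6f s\n", guest-remote) // ≈ 0.069730 s
    }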
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
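The run of `sudo sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches cgroup_manager to "cgroupfs", moves conmon into the "pod" cgroup, and opens unprivileged ports via default_sysctls, then reloads systemd and restarts CRI-O. The same "replace the whole key = value line" idea is sketched below in Go against an in-memory string rather than the real config file; the sample config contents are assumptions, not the VM's actual file.

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey replaces any existing `key = ...` line with the desired value,
    // mimicking the `sudo sed -i 's|^.*key = .*$|...|'` calls in the log.
    func setKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
    }

    func main() {
        // Hypothetical starting config fragment.
        conf := "[crio.runtime]\ncgroup_manager = \"systemd\"\npause_image = \"registry.k8s.io/pause:3.9\"\n"
        conf = setKey(conf, "cgroup_manager", "cgroupfs")
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        fmt.Print(conf)
    }

After edits like these, the daemon has to be restarted (as the log does with `systemctl restart crio`) before the new pause image and cgroup driver take effect.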
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
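The "will retry after ..." lines above come from a backoff helper (retry.go:31) that re-polls libvirt for the machine's DHCP lease until an IP appears. A minimal sketch of such a retry loop with a growing, jittered delay; the exact growth and jitter policy is an assumption, only the message format mirrors the log:

    // retry_sketch.go - illustrative "will retry after" backoff loop.
    // The doubling base delay and uniform jitter are assumptions.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or maxAttempts is reached,
    // sleeping a randomized, growing delay between attempts.
    func retry(maxAttempts int, fn func() error) error {
        base := 200 * time.Millisecond
        for attempt := 1; attempt <= maxAttempts; attempt++ {
            err := fn()
            if err == nil {
                return nil
            }
            if attempt == maxAttempts {
                return err
            }
            delay := time.Duration(float64(base) * (1 + rand.Float64()))
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            base *= 2
        }
        return nil
    }

    func main() {
        calls := 0
        _ = retry(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        })
    }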
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
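The preload step above copies preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 to the guest and unpacks it with tar -I lz4 before re-checking crictl images. A sketch of driving that same extraction command from Go via os/exec; the paths are the ones shown in the log, and the sketch assumes tar and lz4 are installed:

    // preload_extract_sketch.go - illustrative; runs the same tar/lz4
    // extraction command as the log above. Paths are examples.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            os.Exit(1)
        }
    }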
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
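The kubeadm/kubelet/kube-proxy config dumped above is rendered from a handful of node-specific values (advertise address, bind port 8444, node name, CRI socket) and written to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch that renders just the InitConfiguration stanza from those values; the template text and field names are illustrative, not minikube's actual template:

    // kubeadm_template_sketch.go - illustrative; renders an InitConfiguration
    // stanza like the one in the log from node-specific values.
    package main

    import (
        "os"
        "text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        data := struct {
            AdvertiseAddress, CRISocket, NodeName, NodeIP string
            BindPort                                      int
        }{
            AdvertiseAddress: "192.168.72.134",
            BindPort:         8444,
            CRISocket:        "unix:///var/run/crio/crio.sock",
            NodeName:         "default-k8s-diff-port-733270",
            NodeIP:           "192.168.72.134",
        }
        tmpl := template.Must(template.New("init").Parse(initCfg))
        _ = tmpl.Execute(os.Stdout, data)
    }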
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
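Each openssl x509 -noout -in <cert> -checkend 86400 call above asks whether the certificate will still be valid 86400 seconds (24 hours) from now. The equivalent check with the standard crypto/x509 package looks roughly like this; the path in main is an example:

    // cert_expiry_sketch.go - illustrative equivalent of
    // "openssl x509 -noout -in <cert> -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within the given window (or is already expired).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }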
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
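The healthz wait above polls https://192.168.72.134:8444/healthz, treating the interim 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" until a 200 arrives. A minimal sketch of such a poll loop; InsecureSkipVerify is used only because the sketch, like the anonymous probes in the log, does not present the cluster CA:

    // healthz_poll_sketch.go - illustrative poll of an apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Anonymous probe without the cluster CA, as in the log above.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not return 200 within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.134:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }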
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
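	One way to double-check the SANs baked into the freshly generated server certificate, as a hypothetical follow-up rather than a step the test performs:

		openssl x509 -in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'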
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
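	The sequence above amounts to a small piece of CRI-O runtime configuration before the restart that follows. A sketch of the expected end state, with a hypothetical verification command that is not part of the test run:

		# /etc/crictl.yaml now points crictl at the CRI-O socket:
		#   runtime-endpoint: unix:///var/run/crio/crio.sock
		# and the sed edits to /etc/crio/crio.conf.d/02-crio.conf should leave:
		#   pause_image = "registry.k8s.io/pause:3.2"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf && cat /etc/crictl.yaml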
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
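	That one-liner rewrites /etc/hosts so the guest can resolve the host machine; reconstructed from the command itself rather than captured from the VM, the file should now contain a 192.168.61.1 entry:

		grep 'host.minikube.internal' /etc/hosts    # expected: 192.168.61.1	host.minikube.internal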
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
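	A hand-run version of this preload check, assuming jq is available in the guest (the log gives no indication either way):

		sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep 'kube-apiserver:v1.20.0' || echo 'kube-apiserver:v1.20.0 missing, preload required'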
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
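For context on the warning above: images missing from the CRI runtime are removed with crictl and then re-loaded from the local cache directory, and when the cached tarball itself is absent the load is skipped. A purely illustrative Go sketch (not minikube's actual code) of the path lookup that produces the "no such file or directory" error, using the cache layout visible in the log:

// Hypothetical sketch: map an image reference to its expected tarball under
// .minikube/cache/images/amd64 and stat it.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cachedImagePath(cacheDir, image string) string {
	// registry.k8s.io/etcd:3.4.13-0 -> <cacheDir>/registry.k8s.io/etcd_3.4.13-0
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

func main() {
	cacheDir := "/home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64"
	for _, img := range []string{"registry.k8s.io/etcd:3.4.13-0", "registry.k8s.io/pause:3.2"} {
		p := cachedImagePath(cacheDir, img)
		if _, err := os.Stat(p); err != nil {
			fmt.Printf("cache miss for %s: %v\n", img, err)
			continue
		}
		fmt.Printf("would load %s from %s\n", img, p)
	}
}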
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
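The `openssl x509 -checkend 86400` runs above each ask one question per certificate: will it expire within the next 24 hours (86400 seconds)? A minimal Go sketch of the same check using crypto/x509 (illustrative only; the file path is just one example taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}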
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
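The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a simple poll: the same command is retried roughly every 500ms until the apiserver process appears or a deadline passes. A hypothetical local sketch of that loop (the real command runs over SSH on the guest, and the 2-minute timeout here is an assumption, not a value from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until the kube-apiserver process exists or the
// timeout elapses; exec's Run returns an error when nothing matches.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}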
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
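The `generating server cert` line above lists the SANs placed on the machine's TLS server certificate: 127.0.0.1, 192.168.39.162, localhost, minikube, no-preload-480205. A short illustrative Go sketch that creates a certificate with that SAN set (self-signed here for brevity; the real flow signs it with the ca.pem/ca-key.pem pair referenced in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-480205"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-480205"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.162")},
	}
	// Self-signed in this sketch; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}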
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
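
The two fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host's reference time and accept the ~64ms drift as within tolerance. A minimal sketch of that delta computation, re-done in Go from the timestamps printed in the log (illustrative only, not minikube's own code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the fix.go lines above: the guest clock read over
	// SSH and the host-side reference time.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	guest, err := time.Parse(layout, "2024-10-09 20:17:55.515960663 +0000 UTC")
	if err != nil {
		panic(err)
	}
	remote, err := time.Parse(layout, "2024-10-09 20:17:55.451376872 +0000 UTC")
	if err != nil {
		panic(err)
	}

	// A positive delta means the guest clock is slightly ahead of the host.
	delta := guest.Sub(remote)
	fmt.Println("guest clock delta:", delta) // ~64.583791ms
}
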
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
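
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. A rough Go equivalent of just the pause_image rewrite, using only the standard library instead of sed (a sketch assuming the drop-in file exists at the path shown in the log; not minikube's actual implementation):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Drop-in file path as it appears in the log above.
	const path = "/etc/crio/crio.conf.d/02-crio.conf"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Same effect as:
	//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path)
}
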
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
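
The cache_images/crio lines above transfer image tarballs from the local cache into the VM and load each one into the container runtime's storage with `podman load -i`. A bare-bones Go sketch of that load step (it assumes the tarball path staged in the log above and simply shells out to podman; this is not minikube's actual ssh_runner code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tarball path as staged in the log above.
	tar := "/var/lib/minikube/images/kube-proxy_v1.31.1"

	// Equivalent of: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("podman load failed: %v\n%s", err, out))
	}
	fmt.Printf("%s", out)
}
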
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
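
One property the kubeadm.yaml rendered above relies on is that the pod subnet (10.244.0.0/16) and the service CIDR (10.96.0.0/12) do not overlap. A quick stand-alone check of that assumption with the Go standard library (illustrative only, not part of minikube):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets from the kubeadm config rendered above.
	_, podNet, err := net.ParseCIDR("10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	_, svcNet, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}

	// CIDR blocks are either disjoint or nested, so it is enough to test
	// whether either network address lies inside the other block.
	overlap := podNet.Contains(svcNet.IP) || svcNet.Contains(podNet.IP)
	fmt.Println("pod/service CIDR overlap:", overlap) // false
}
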
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
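
Each `openssl x509 -checkend 86400` call above exits successfully only if the certificate will still be valid 86400 seconds (24 hours) from now, presumably as a pre-flight validity check before the cluster is restarted. A hedged Go equivalent for one of those files using crypto/x509 (sketch only; the path is taken from the log above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked with `-checkend 86400` in the log above.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Equivalent of openssl's -checkend 86400: is the cert still valid 24h from now?
	stillValid := time.Now().Add(24 * time.Hour).Before(cert.NotAfter)
	fmt.Println("valid for at least another 24h:", stillValid)
}
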
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
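For context, the restart path logged above re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of a full `kubeadm init`. Below is a minimal sketch of that same sequence run locally with os/exec, assuming the kubeadm config path and binaries PATH shown in the log; error handling and the SSH transport minikube actually uses are omitted.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as the log above; each phase reads the generated kubeadm.yaml.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the bundled binaries, as the logged PATH override does.
		cmd.Env = []string{"PATH=/var/lib/minikube/binaries/v1.31.1:/usr/sbin:/usr/bin:/sbin:/bin"}
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm phases completed")
}
```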
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
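The healthz wait above is a plain HTTPS poll against the apiserver: anonymous requests first return 403, then 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish, and finally 200 "ok". A minimal sketch of that loop, assuming the endpoint from the log and skipping certificate verification the way an unauthenticated probe would:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The probe is anonymous, so it does not verify the apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.162:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 are expected while post-start hooks are still running.
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```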
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
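The bridge CNI step simply drops a conflist into /etc/cni/net.d (a 496-byte file per the log). The exact contents minikube writes are not shown here, so the sketch below writes a generic bridge + host-local + portmap chain purely to illustrate the file format; the field values are assumptions, not the file minikube ships.

```go
package main

import (
	"fmt"
	"os"
)

// A generic CNI bridge conflist for illustration only, NOT the exact file minikube generates.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("wrote /etc/cni/net.d/1-k8s.conflist")
}
```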
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
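The pod_ready.go loop above follows the usual pattern: check each system-critical pod's PodReady condition, but skip (and log) pods whose node is not yet "Ready". A compact client-go sketch of the core check, assuming the kubeconfig path and the coredns pod name taken from the log; the real helper also matches pods by component/k8s-app labels and layers its own timeout handling on top.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19780-9412/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-dddm2", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```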
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
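Enabling an addon here amounts to copying its manifests onto the node and applying them with the bundled kubectl against the node's kubeconfig, as the commands above show. A minimal local sketch of that apply, with the kubectl binary, manifest paths, and kubeconfig taken from the log and error handling trimmed:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubectl", args...)
	// Same kubeconfig the logged command passes via KUBECONFIG=...
	cmd.Env = []string{"KUBECONFIG=/var/lib/minikube/kubeconfig"}
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```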
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
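The interleaved pod_ready lines come from three other concurrently running profiles (log prefixes 63427, 63744, 64109), each polling its metrics-server pod, which never reports Ready. An illustrative manual check of the same condition, using one of the pod names from the log (the --context value is a placeholder, not taken from this run), might be:

	# Sketch only: read the Ready condition that pod_ready.go is polling.
	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-8p24l \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'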
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
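The recurring stderr above is the key symptom: kubectl describe nodes is pointed at localhost:8443 through the node's kubeconfig, but nothing is serving there, which matches crictl finding no kube-apiserver container in every probe. One way to confirm this by hand (illustrative only, not part of the test run; <profile> stands in for the profile under test):

	# Sketch only: check for an apiserver container and for a listener on 8443.
	out/minikube-linux-amd64 -p <profile> ssh "sudo crictl ps -a --name=kube-apiserver"
	out/minikube-linux-amd64 -p <profile> ssh "sudo ss -tlnp | grep 8443 || echo nothing listening on 8443"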
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
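	The 63744 lines above show the other half of the loop: pod_ready.go polls the metrics-server pod's Ready condition until a 4m0s deadline expires, after which kubeadm.go gives up on restarting the control plane and falls back to `kubeadm reset`. A stdlib-only Go sketch of that poll-until-deadline pattern follows; the helper name and intervals are illustrative assumptions, not minikube's API.

	// waitready.go - illustrative sketch of the poll-with-deadline
	// pattern visible in pod_ready.go above; not minikube's code.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it returns true or the
	// timeout elapses, mirroring the "timed out waiting 4m0s for pod
	// ... to be Ready" behaviour in the log.
	func waitFor(check func() (bool, error), interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		start := time.Now()
		err := waitFor(func() (bool, error) {
			// A real caller would query the pod's Ready condition here;
			// this stub never becomes ready, so the deadline is hit.
			return false, nil
		}, 2*time.Second, 10*time.Second)
		fmt.Printf("result after %s: %v\n", time.Since(start).Round(time.Second), err)
	}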
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
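In the cri.go block above, minikube lists containers per control-plane component by shelling out to crictl; each empty "found id" with "0 containers" means that component has no container yet, so there are no per-component logs to gather. A rough sketch of that listing step, assuming crictl is installed and reachable via sudo (the helper name here is made up for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs approximates "sudo crictl ps -a --quiet --name=<name>"
    // from the log: it returns the IDs of all containers (running or exited)
    // whose name matches the given component, one ID per output line.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }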
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
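The half-second cadence of the repeated "kubectl get sa default" calls above is minikube polling until the default service account exists before it declares the privilege elevation (and StartCluster) finished. A hedged client-go equivalent of that wait, using an explicit kubeconfig path as a placeholder rather than minikube's embedded configuration:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms (the cadence visible in the log) until the "default"
        // service account appears in the "default" namespace, or give up after
        // two minutes.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, getErr := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                return getErr == nil, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("default service account is present")
    }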
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
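The pod_ready.go waits above keep re-checking each system-critical pod until it reports the Ready condition. A minimal client-go sketch of that predicate; the kubeconfig path and pod name are placeholders taken from this run, not minikube's internal wiring:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod has its Ready condition set to
    // True, which is the check the pod_ready waits in the log keep repeating.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-embed-certs-503330")
        fmt.Printf("ready=%v err=%v\n", ready, err)
    }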
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
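api_server.go first waits for a kube-apiserver process, then probes the /healthz endpoint until it answers 200, and only then reads the control-plane version. A bare-bones version of that HTTPS probe is sketched below; it skips certificate verification to stay self-contained (minikube itself trusts its own generated CA), and the address is the one from this run.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a certificate from minikube's own CA;
            // skipping verification keeps this sketch short - do not do this
            // in production code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.50.97:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // a healthy apiserver answers: 200 ok
    }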
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
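The NodePressure step above reads each node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs) while verifying the node is not under resource pressure. A short client-go sketch that prints the same capacity fields and the node conditions; the kubeconfig path is again a placeholder:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            // On a healthy node the pressure conditions (MemoryPressure,
            // DiskPressure, PIDPressure) are False and Ready is True.
            for _, c := range n.Status.Conditions {
                fmt.Printf("  %s=%s\n", c.Type, c.Status)
            }
        }
    }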
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
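The kubelet-check wait above polls a plain HTTP health endpoint; it can be reproduced by hand on the node (illustrative, not part of the run):

    curl http://127.0.0.1:10248/healthz    # kubelet health endpoint polled by kubeadm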
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
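Both warnings above name their own remedies; run by hand on the node they would look roughly like this (the new-config path is illustrative, everything else is quoted from the warnings and from the kubeadm init invocation earlier in the log):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-migrated.yaml
    sudo systemctl enable kubelet.service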
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
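The 496-byte payload copied above is a bridge CNI conflist. A representative example of that kind of file, written here as a heredoc; the field values are illustrative and may differ from the exact file minikube ships:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF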
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
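With the addons applied, the usual hand check for the metrics-server addon would be a rollout wait followed by a metrics query (illustrative commands, assuming the kubeconfig points at this profile):

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl top nodes    # only returns data once metrics-server is serving the metrics API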
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
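The healthz probe above can be repeated manually against the same endpoint; -k skips certificate verification, or pass the cluster CA from the certificate directory shown earlier (ca.crt is the standard kubeadm file name, assumed here):

    curl -k https://192.168.72.134:8444/healthz
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.72.134:8444/healthz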
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
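At this point the profile's context is the active one, so a typical follow-up check would be (illustrative):

    kubectl config use-context default-k8s-diff-port-733270
    kubectl get pods -A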
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
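The wait above gave up on the metrics-server pod after 4m0s without it turning Ready. A hand inspection of that pod, assuming kubectl is pointed at the profile this 63427 process is driving, would be:

    kubectl -n kube-system get pod metrics-server-6867b74b74-fhcfl
    kubectl -n kube-system describe pod metrics-server-6867b74b74-fhcfl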
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
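The log collection above can be reproduced by hand on the node with the same commands the runner issued (container IDs come from the crictl ps output):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400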
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
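The passage above is minikube's readiness loop for the "no-preload-480205" cluster: each control-plane component is located by name with crictl, its recent logs are tailed, journalctl is read for the kubelet and CRI-O units, and the apiserver is polled on /healthz until it answers 200. Roughly the same checks can be run by hand on the node (a sketch only; the container ID is a placeholder and the endpoint is the one this run reports):

	# find a component's container and tail its logs, as the loop above does
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo /usr/bin/crictl logs --tail 400 <container-id>
	# unit logs for the kubelet and the container runtime
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	# the health endpoint minikube waits on before listing kube-system pods
	curl -k https://192.168.39.162:8443/healthz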
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
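Before retrying the failed init, minikube runs 'kubeadm reset', then checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes any file that does not contain it; here every grep exits with status 2 simply because the reset already removed the files. The sequence of Run lines above corresponds roughly to this shell loop (a sketch, not minikube's actual code):

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
	    || sudo rm -f /etc/kubernetes/$f
	done

The init is then attempted a second time below.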
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
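Both kubeadm init attempts for this v1.20.0 cluster fail in the wait-control-plane phase: the kubelet never answers its local health probe, so after 7m57s minikube gives up on StartCluster. The troubleshooting steps kubeadm prints can be run directly on the node (these are the commands quoted in the output above; the curl probe is the one kubeadm itself issues):

	systemctl status kubelet
	journalctl -xeu kubelet
	curl -sSL http://localhost:10248/healthz
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

The stderr warning also notes the kubelet service is not enabled ('systemctl enable kubelet.service'). The rest of the log collects container listings (all empty) for the failed cluster before minikube exits with K8S_KUBELET_NOT_RUNNING.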
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
	
	
	==> CRI-O <==
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.030167327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506088030140967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95379d7a-f2b1-4000-b89c-83ea0835e2e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.030961770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=30f4fc1b-88a7-4ed6-a27a-d41bea1482ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.031021682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=30f4fc1b-88a7-4ed6-a27a-d41bea1482ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.031051385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=30f4fc1b-88a7-4ed6-a27a-d41bea1482ca name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.065130431Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aa8d7d41-9fb6-492c-908b-b52660b2d5d0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.065274295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aa8d7d41-9fb6-492c-908b-b52660b2d5d0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.066630951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bc81b5c-a9e5-4281-8d93-e8dcd783cbe8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.067070012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506088067049163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bc81b5c-a9e5-4281-8d93-e8dcd783cbe8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.067677569Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e04628a9-5c68-4286-9740-f042ee03c160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.067739578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e04628a9-5c68-4286-9740-f042ee03c160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.067774316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e04628a9-5c68-4286-9740-f042ee03c160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.098367571Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba8361d3-17e4-4016-bbd9-d3523effa94b name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.098434561Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba8361d3-17e4-4016-bbd9-d3523effa94b name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.099612845Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f65d048-bc11-4379-bfbc-901b35b6a70a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.099987621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506088099961906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f65d048-bc11-4379-bfbc-901b35b6a70a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.100590423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f82ff206-c73d-4a36-a399-cd57c61a283a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.100647555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f82ff206-c73d-4a36-a399-cd57c61a283a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.100681603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f82ff206-c73d-4a36-a399-cd57c61a283a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.134133180Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1b415b7-d5af-4ccb-be55-d14b561325fb name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.134261481Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1b415b7-d5af-4ccb-be55-d14b561325fb name=/runtime.v1.RuntimeService/Version
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.135308492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d15a312-2f85-4565-a944-439505f2a12c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.135678518Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506088135653372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d15a312-2f85-4565-a944-439505f2a12c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.136125619Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07b6b4e8-fbea-4021-a628-781977c14830 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.136177412Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07b6b4e8-fbea-4021-a628-781977c14830 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:34:48 old-k8s-version-169021 crio[636]: time="2024-10-09 20:34:48.136265199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=07b6b4e8-fbea-4021-a628-781977c14830 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051476] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.042560] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.485695] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.304560] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.057777] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071040] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192125] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.124687] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.295888] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.664222] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.065570] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.848518] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +8.732358] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 9 20:21] systemd-fstab-generator[5090]: Ignoring "noauto" option for root device
	[Oct 9 20:23] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +0.064209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:34:48 up 17 min,  0 users,  load average: 0.00, 0.02, 0.01
	Linux old-k8s-version-169021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: goroutine 149 [chan receive]:
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000b96ea0)
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: goroutine 150 [select]:
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bcfef0, 0x4f0ac20, 0xc000b0f590, 0x1, 0xc0001000c0)
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000946c40, 0xc0001000c0)
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b2edf0, 0xc000b15ee0)
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6539]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 09 20:34:43 old-k8s-version-169021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 09 20:34:43 old-k8s-version-169021 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 09 20:34:43 old-k8s-version-169021 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6548]: I1009 20:34:43.957084    6548 server.go:416] Version: v1.20.0
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6548]: I1009 20:34:43.957486    6548 server.go:837] Client rotation is on, will bootstrap in background
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6548]: I1009 20:34:43.959717    6548 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6548]: I1009 20:34:43.960802    6548 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 09 20:34:43 old-k8s-version-169021 kubelet[6548]: W1009 20:34:43.960861    6548 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (231.27595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-169021" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)
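Note on this failure: the kubelet on old-k8s-version-169021 never became healthy (systemd restart counter at 114, "Cannot detect current cgroup on cgroup v2" in the kubelet log above), which is exactly the situation minikube's own suggestion in the start output addresses ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start", related issue #4172). A possible manual retry, sketched from the profile name and flags recorded in the Audit log of this report plus that suggested override (not a command the test itself ran), would be:

	# Sketch only: re-run the failing start with the kubelet cgroup-driver override
	# that minikube suggests above for cgroup-v2 hosts; profile name, driver,
	# runtime and Kubernetes version are taken from this report's Audit log.
	out/minikube-linux-amd64 start -p old-k8s-version-169021 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --alsologtostderr \
	  --extra-config=kubelet.cgroup-driver=systemd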

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (413.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503330 -n embed-certs-503330
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:38:02.897090552 +0000 UTC m=+6693.886863473
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-503330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-503330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.301µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-503330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
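Note on this check: the assertion above expects the dashboard-metrics-scraper deployment description to contain registry.k8s.io/echoserver:1.4, the MetricsScraper image passed via --images when the dashboard addon was enabled (see the Audit log below). A rough manual equivalent of that check, using the context name from this report (it only succeeds once the apiserver answers, which it did not here), would be:

	# Sketch only: print the container image(s) of the dashboard-metrics-scraper
	# deployment, which the test expects to contain registry.k8s.io/echoserver:1.4.
	kubectl --context embed-certs-503330 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'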
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-503330 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-503330 logs -n 25: (1.534976791s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:36 UTC | 09 Oct 24 20:36 UTC |
	| start   | -p newest-cni-203991 --memory=2200 --alsologtostderr   | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:36 UTC | 09 Oct 24 20:37 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC | 09 Oct 24 20:37 UTC |
	| start   | -p auto-665212 --memory=3072                           | auto-665212                  | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-203991             | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC | 09 Oct 24 20:37 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-203991                                   | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC | 09 Oct 24 20:37 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-203991                  | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC | 09 Oct 24 20:37 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-203991 --memory=2200 --alsologtostderr   | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:37 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:37:58
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:37:58.877665   71747 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:37:58.877951   71747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:37:58.877961   71747 out.go:358] Setting ErrFile to fd 2...
	I1009 20:37:58.877965   71747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:37:58.878136   71747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:37:58.878646   71747 out.go:352] Setting JSON to false
	I1009 20:37:58.879610   71747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8420,"bootTime":1728497859,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:37:58.879698   71747 start.go:139] virtualization: kvm guest
	I1009 20:37:58.882041   71747 out.go:177] * [newest-cni-203991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:37:58.883913   71747 notify.go:220] Checking for updates...
	I1009 20:37:58.883934   71747 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:37:58.885579   71747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:37:58.886900   71747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:37:58.888253   71747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:37:58.889926   71747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:37:58.891206   71747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:37:58.892997   71747 config.go:182] Loaded profile config "newest-cni-203991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:37:58.893623   71747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:37:58.893682   71747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:37:58.909226   71747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35055
	I1009 20:37:58.909644   71747 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:37:58.910161   71747 main.go:141] libmachine: Using API Version  1
	I1009 20:37:58.910183   71747 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:37:58.910466   71747 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:37:58.910636   71747 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:58.910847   71747 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:37:58.911180   71747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:37:58.911238   71747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:37:58.925821   71747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I1009 20:37:58.926290   71747 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:37:58.926796   71747 main.go:141] libmachine: Using API Version  1
	I1009 20:37:58.926838   71747 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:37:58.927137   71747 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:37:58.927309   71747 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:58.962471   71747 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:37:58.963738   71747 start.go:297] selected driver: kvm2
	I1009 20:37:58.963751   71747 start.go:901] validating driver "kvm2" against &{Name:newest-cni-203991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-203991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:37:58.963890   71747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:37:58.964845   71747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:37:58.964945   71747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:37:58.979289   71747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:37:58.979768   71747 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:37:58.979811   71747 cni.go:84] Creating CNI manager for ""
	I1009 20:37:58.979879   71747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:37:58.979928   71747 start.go:340] cluster config:
	{Name:newest-cni-203991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-203991 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:37:58.980077   71747 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:37:58.982175   71747 out.go:177] * Starting "newest-cni-203991" primary control-plane node in "newest-cni-203991" cluster
	I1009 20:37:56.521898   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:37:56.522290   71356 main.go:141] libmachine: (auto-665212) DBG | unable to find current IP address of domain auto-665212 in network mk-auto-665212
	I1009 20:37:56.522313   71356 main.go:141] libmachine: (auto-665212) DBG | I1009 20:37:56.522256   71378 retry.go:31] will retry after 3.960376864s: waiting for machine to come up
	I1009 20:38:00.487479   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.487980   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has current primary IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.488004   71356 main.go:141] libmachine: (auto-665212) Found IP for machine: 192.168.39.85
	I1009 20:38:00.488017   71356 main.go:141] libmachine: (auto-665212) Reserving static IP address...
	I1009 20:38:00.488331   71356 main.go:141] libmachine: (auto-665212) DBG | unable to find host DHCP lease matching {name: "auto-665212", mac: "52:54:00:5f:01:3b", ip: "192.168.39.85"} in network mk-auto-665212
	I1009 20:38:00.563842   71356 main.go:141] libmachine: (auto-665212) DBG | Getting to WaitForSSH function...
	I1009 20:38:00.563874   71356 main.go:141] libmachine: (auto-665212) Reserved static IP address: 192.168.39.85
	I1009 20:38:00.563887   71356 main.go:141] libmachine: (auto-665212) Waiting for SSH to be available...
	I1009 20:38:00.566266   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.566608   71356 main.go:141] libmachine: (auto-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:01:3b", ip: ""} in network mk-auto-665212: {Iface:virbr1 ExpiryTime:2024-10-09 21:37:51 +0000 UTC Type:0 Mac:52:54:00:5f:01:3b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:5f:01:3b}
	I1009 20:38:00.566636   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.566737   71356 main.go:141] libmachine: (auto-665212) DBG | Using SSH client type: external
	I1009 20:38:00.566763   71356 main.go:141] libmachine: (auto-665212) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/auto-665212/id_rsa (-rw-------)
	I1009 20:38:00.566807   71356 main.go:141] libmachine: (auto-665212) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/auto-665212/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:38:00.566824   71356 main.go:141] libmachine: (auto-665212) DBG | About to run SSH command:
	I1009 20:38:00.566849   71356 main.go:141] libmachine: (auto-665212) DBG | exit 0
	I1009 20:38:00.694978   71356 main.go:141] libmachine: (auto-665212) DBG | SSH cmd err, output: <nil>: 
	I1009 20:38:00.695252   71356 main.go:141] libmachine: (auto-665212) KVM machine creation complete!
	I1009 20:38:00.695677   71356 main.go:141] libmachine: (auto-665212) Calling .GetConfigRaw
	I1009 20:38:00.696234   71356 main.go:141] libmachine: (auto-665212) Calling .DriverName
	I1009 20:38:00.696413   71356 main.go:141] libmachine: (auto-665212) Calling .DriverName
	I1009 20:38:00.696559   71356 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:38:00.696572   71356 main.go:141] libmachine: (auto-665212) Calling .GetState
	I1009 20:38:00.697763   71356 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:38:00.697780   71356 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:38:00.697788   71356 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:38:00.697796   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHHostname
	I1009 20:38:00.699755   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.700027   71356 main.go:141] libmachine: (auto-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:01:3b", ip: ""} in network mk-auto-665212: {Iface:virbr1 ExpiryTime:2024-10-09 21:37:51 +0000 UTC Type:0 Mac:52:54:00:5f:01:3b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:auto-665212 Clientid:01:52:54:00:5f:01:3b}
	I1009 20:38:00.700049   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.700175   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHPort
	I1009 20:38:00.700357   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.700503   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.700606   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHUsername
	I1009 20:38:00.700744   71356 main.go:141] libmachine: Using SSH client type: native
	I1009 20:38:00.700931   71356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1009 20:38:00.700942   71356 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:38:00.802418   71356 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:38:00.802449   71356 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:38:00.802459   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHHostname
	I1009 20:38:00.804988   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.805415   71356 main.go:141] libmachine: (auto-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:01:3b", ip: ""} in network mk-auto-665212: {Iface:virbr1 ExpiryTime:2024-10-09 21:37:51 +0000 UTC Type:0 Mac:52:54:00:5f:01:3b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:auto-665212 Clientid:01:52:54:00:5f:01:3b}
	I1009 20:38:00.805439   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.805564   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHPort
	I1009 20:38:00.805741   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.805978   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.806137   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHUsername
	I1009 20:38:00.806311   71356 main.go:141] libmachine: Using SSH client type: native
	I1009 20:38:00.806527   71356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1009 20:38:00.806542   71356 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:38:00.911685   71356 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:38:00.911785   71356 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:38:00.911799   71356 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:38:00.911809   71356 main.go:141] libmachine: (auto-665212) Calling .GetMachineName
	I1009 20:38:00.912034   71356 buildroot.go:166] provisioning hostname "auto-665212"
	I1009 20:38:00.912055   71356 main.go:141] libmachine: (auto-665212) Calling .GetMachineName
	I1009 20:38:00.912225   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHHostname
	I1009 20:38:00.914460   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.914776   71356 main.go:141] libmachine: (auto-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:01:3b", ip: ""} in network mk-auto-665212: {Iface:virbr1 ExpiryTime:2024-10-09 21:37:51 +0000 UTC Type:0 Mac:52:54:00:5f:01:3b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:auto-665212 Clientid:01:52:54:00:5f:01:3b}
	I1009 20:38:00.914800   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:00.914949   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHPort
	I1009 20:38:00.915127   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.915269   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:00.915411   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHUsername
	I1009 20:38:00.915551   71356 main.go:141] libmachine: Using SSH client type: native
	I1009 20:38:00.915706   71356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1009 20:38:00.915717   71356 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-665212 && echo "auto-665212" | sudo tee /etc/hostname
	I1009 20:38:01.034922   71356 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-665212
	
	I1009 20:38:01.034945   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHHostname
	I1009 20:38:01.037780   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:01.038162   71356 main.go:141] libmachine: (auto-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:01:3b", ip: ""} in network mk-auto-665212: {Iface:virbr1 ExpiryTime:2024-10-09 21:37:51 +0000 UTC Type:0 Mac:52:54:00:5f:01:3b Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:auto-665212 Clientid:01:52:54:00:5f:01:3b}
	I1009 20:38:01.038183   71356 main.go:141] libmachine: (auto-665212) DBG | domain auto-665212 has defined IP address 192.168.39.85 and MAC address 52:54:00:5f:01:3b in network mk-auto-665212
	I1009 20:38:01.038424   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHPort
	I1009 20:38:01.038582   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:01.038741   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHKeyPath
	I1009 20:38:01.038887   71356 main.go:141] libmachine: (auto-665212) Calling .GetSSHUsername
	I1009 20:38:01.039042   71356 main.go:141] libmachine: Using SSH client type: native
	I1009 20:38:01.039254   71356 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I1009 20:38:01.039270   71356 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-665212' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-665212/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-665212' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:37:58.983522   71747 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:37:58.983560   71747 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:37:58.983569   71747 cache.go:56] Caching tarball of preloaded images
	I1009 20:37:58.983651   71747 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:37:58.983664   71747 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:37:58.983787   71747 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/newest-cni-203991/config.json ...
	I1009 20:37:58.983960   71747 start.go:360] acquireMachinesLock for newest-cni-203991: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:38:01.992289   71747 start.go:364] duration metric: took 3.008304233s to acquireMachinesLock for "newest-cni-203991"
	I1009 20:38:01.992342   71747 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:38:01.992381   71747 fix.go:54] fixHost starting: 
	I1009 20:38:01.992782   71747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:38:01.992838   71747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:38:02.009887   71747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33413
	I1009 20:38:02.010328   71747 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:38:02.010864   71747 main.go:141] libmachine: Using API Version  1
	I1009 20:38:02.010889   71747 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:38:02.011205   71747 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:38:02.011391   71747 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:38:02.011528   71747 main.go:141] libmachine: (newest-cni-203991) Calling .GetState
	I1009 20:38:02.012962   71747 fix.go:112] recreateIfNeeded on newest-cni-203991: state=Stopped err=<nil>
	I1009 20:38:02.012987   71747 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	W1009 20:38:02.013130   71747 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:38:02.015092   71747 out.go:177] * Restarting existing kvm2 VM for "newest-cni-203991" ...
	
	
	==> CRI-O <==
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.592459167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506283592398528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0fd45394-ef25-4d02-98ad-2153efc02214 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.593194620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccbe48a2-0667-480d-818d-81f2b1442160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.593285764Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccbe48a2-0667-480d-818d-81f2b1442160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.593496785Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccbe48a2-0667-480d-818d-81f2b1442160 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.637501871Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bb87ca93-556f-466e-9370-eb0a662359ef name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.637600339Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bb87ca93-556f-466e-9370-eb0a662359ef name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.638800607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f65f9125-3d0f-43d5-896a-bc685ed287d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.639396211Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506283639373437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f65f9125-3d0f-43d5-896a-bc685ed287d7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.640025478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a0e3e99-bb6e-4742-8377-31011f6adf1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.640094623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a0e3e99-bb6e-4742-8377-31011f6adf1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.640428065Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a0e3e99-bb6e-4742-8377-31011f6adf1d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.688062850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d5e7198-4356-42e5-91fb-efdbf16469d0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.688184989Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d5e7198-4356-42e5-91fb-efdbf16469d0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.690341261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f249959-7962-4f66-a7c4-54fbb46478c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.691165454Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506283690861573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f249959-7962-4f66-a7c4-54fbb46478c8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.691942271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36d57ef5-94ea-451a-99b5-cb1d25d4d247 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.692129775Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36d57ef5-94ea-451a-99b5-cb1d25d4d247 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.692401642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36d57ef5-94ea-451a-99b5-cb1d25d4d247 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.731750650Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bc252c53-d67d-4b45-83bf-814715e49330 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.731845593Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bc252c53-d67d-4b45-83bf-814715e49330 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.732733118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7285ca4f-1bfd-46e6-ac78-a5330341d5e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.733224065Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506283733198685,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7285ca4f-1bfd-46e6-ac78-a5330341d5e8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.733778713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5d42fe6-50dc-455f-b770-ba1b0e6d59e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.733836547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5d42fe6-50dc-455f-b770-ba1b0e6d59e4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:38:03 embed-certs-503330 crio[704]: time="2024-10-09 20:38:03.734940817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a,PodSandboxId:8f5916641bbd938fe668950ba48af3fe0e70037be279612e3a6bb7129951864c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318891359726,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-sttbg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 453ffb79-d6d0-4ba4-baf6-cbc00df68cc2,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db,PodSandboxId:da48c0dc35de9e83a87164ada9aff8fafd3665d0f39ed483a251fa727e03f63d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505318834041175,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-j62fb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: ecbf7b08-3855-42ca-a144-2cada67e9d09,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495,PodSandboxId:d92c670b7744760cad7029d9660c90c33a37327966d1a068234cb0bbee6bce88,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAI
NER_RUNNING,CreatedAt:1728505318281342598,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 13817757-5de5-44be-9976-cb3bda284db8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7,PodSandboxId:22876380557a1cb5ae5857dbcf5b0eaab8a4824c7a1a43d7850dffe18a7c376c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728505316999122240,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k4sqz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e699a0fc-e2f4-45b5-960b-54c2a4a35b87,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515,PodSandboxId:b2bde796ac664c08376ff5ea5b551fde11248497ad63c9b9e5d1b7f2445665ea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505306284544768,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d52e5f8a8a8412af6c7521c2cc02f7ce,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc,PodSandboxId:d5072486eae960ecec1d6e0001e142334b8489d3618703126fab65c5432a6da3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505306239463143,Labels:map[string]string{io.kubernetes.container.n
ame: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53,PodSandboxId:62eaadee2c3caa001429ab2a7ad1e7aca0fba67312de5c7ef6c21c7d26dff5ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505306259587642,Labels:map[string]string{io.kubernetes.container.name: kube-sc
heduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c3a1b12584c3dd3cb34e987e1c30d14,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690,PodSandboxId:93886a9a91159df8564905e2469ae0a0127b9456c544f6b2b1ad01b3c4047116,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505306171197804,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8f8f73ffb2fe0d46539c0a20375dc20,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76,PodSandboxId:caa0e92eb3b98fc5487ce1aa790042a4ae596a43e51cccb868700fa7a3b89325,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505018488151298,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-503330,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76f1adc2f31e6aeea0330992973b4261,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5d42fe6-50dc-455f-b770-ba1b0e6d59e4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f54c3ad65ef8d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   8f5916641bbd9       coredns-7c65d6cfc9-sttbg
	0929c43db517c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   da48c0dc35de9       coredns-7c65d6cfc9-j62fb
	a4b1466595b03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   d92c670b77447       storage-provisioner
	f0fe16f40d36b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   22876380557a1       kube-proxy-k4sqz
	690ad9c304dde       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   b2bde796ac664       etcd-embed-certs-503330
	e84e79116fa9d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   62eaadee2c3ca       kube-scheduler-embed-certs-503330
	48c2502451c29       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   d5072486eae96       kube-apiserver-embed-certs-503330
	a4c55d4cc5526       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   93886a9a91159       kube-controller-manager-embed-certs-503330
	6c6d9ae1a9bc9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   caa0e92eb3b98       kube-apiserver-embed-certs-503330
	
	
	==> coredns [0929c43db517c98382d75c360c5cee0a2d2ddda1a666de61823a52af3f24e0db] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f54c3ad65ef8d320409101d05791d94db126065de6fd7001b6bc602e264f438a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-503330
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-503330
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=embed-certs-503330
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:21:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-503330
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:38:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:37:20 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:37:20 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:37:20 +0000   Wed, 09 Oct 2024 20:21:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:37:20 +0000   Wed, 09 Oct 2024 20:21:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.97
	  Hostname:    embed-certs-503330
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4243dbb5d07040f1ad6a69aba7094125
	  System UUID:                4243dbb5-d070-40f1-ad6a-69aba7094125
	  Boot ID:                    ddf6df5b-081d-4a26-9b14-4a310973fe13
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-j62fb                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-sttbg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-503330                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-503330             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-503330    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-k4sqz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-503330             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-79m5x               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node embed-certs-503330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node embed-certs-503330 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node embed-certs-503330 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node embed-certs-503330 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node embed-certs-503330 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node embed-certs-503330 event: Registered Node embed-certs-503330 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050452] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040101] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.844461] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556668] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.610089] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.746912] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.112208] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.169664] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.166908] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.306774] systemd-fstab-generator[694]: Ignoring "noauto" option for root device
	[  +4.025426] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +1.990110] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +0.067524] kauditd_printk_skb: 158 callbacks suppressed
	[Oct 9 20:17] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.817807] kauditd_printk_skb: 85 callbacks suppressed
	[Oct 9 20:21] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.405960] systemd-fstab-generator[2567]: Ignoring "noauto" option for root device
	[  +4.639817] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.432243] systemd-fstab-generator[2893]: Ignoring "noauto" option for root device
	[  +5.832776] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.061212] systemd-fstab-generator[3033]: Ignoring "noauto" option for root device
	[Oct 9 20:22] kauditd_printk_skb: 88 callbacks suppressed
	
	
	==> etcd [690ad9c304dde9ab9e6335a6b839c72b263434f0b70b119b00554292baf9c515] <==
	{"level":"info","ts":"2024-10-09T20:21:47.316212Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1f2cc3497df204b1","local-member-attributes":"{Name:embed-certs-503330 ClientURLs:[https://192.168.50.97:2379]}","request-path":"/0/members/1f2cc3497df204b1/attributes","cluster-id":"a36d2e63d2f8b676","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:21:47.316378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:21:47.316754Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a36d2e63d2f8b676","local-member-id":"1f2cc3497df204b1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316849Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316873Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:21:47.316898Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:21:47.316924Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:21:47.316932Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:21:47.318177Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:21:47.319370Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:21:47.322546Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:21:47.323264Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.97:2379"}
	{"level":"info","ts":"2024-10-09T20:31:47.365105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":721}
	{"level":"info","ts":"2024-10-09T20:31:47.376653Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":721,"took":"10.02854ms","hash":3790351902,"current-db-size-bytes":2207744,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2207744,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-09T20:31:47.376775Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3790351902,"revision":721,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T20:36:47.376427Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-10-09T20:36:47.380701Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":963,"took":"3.960739ms","hash":4080457165,"current-db-size-bytes":2207744,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1552384,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-09T20:36:47.380745Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4080457165,"revision":963,"compact-revision":721}
	{"level":"info","ts":"2024-10-09T20:37:10.837560Z","caller":"traceutil/trace.go:171","msg":"trace[923446609] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"100.965013ms","start":"2024-10-09T20:37:10.736567Z","end":"2024-10-09T20:37:10.837532Z","steps":["trace[923446609] 'process raft request'  (duration: 92.637842ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:37:31.588895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.656336ms","expected-duration":"100ms","prefix":"","request":"header:<ID:338212469411286574 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-503330\" mod_revision:1238 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-503330\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-503330\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:37:31.589781Z","caller":"traceutil/trace.go:171","msg":"trace[554384905] linearizableReadLoop","detail":"{readStateIndex:1450; appliedIndex:1449; }","duration":"136.326973ms","start":"2024-10-09T20:37:31.453435Z","end":"2024-10-09T20:37:31.589762Z","steps":["trace[554384905] 'read index received'  (duration: 29.425µs)","trace[554384905] 'applied index is now lower than readState.Index'  (duration: 136.295981ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T20:37:31.589901Z","caller":"traceutil/trace.go:171","msg":"trace[688572219] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"370.68343ms","start":"2024-10-09T20:37:31.219208Z","end":"2024-10-09T20:37:31.589892Z","steps":["trace[688572219] 'process raft request'  (duration: 115.205008ms)","trace[688572219] 'compare'  (duration: 253.338896ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:37:31.590058Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:37:31.219189Z","time spent":"370.762812ms","remote":"127.0.0.1:34562","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":560,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-503330\" mod_revision:1238 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-503330\" value_size:501 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-503330\" > >"}
	{"level":"warn","ts":"2024-10-09T20:37:31.590391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.946844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:37:31.590450Z","caller":"traceutil/trace.go:171","msg":"trace[1333867085] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1246; }","duration":"137.008669ms","start":"2024-10-09T20:37:31.453431Z","end":"2024-10-09T20:37:31.590439Z","steps":["trace[1333867085] 'agreement among raft nodes before linearized reading'  (duration: 136.923096ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:38:04 up 21 min,  0 users,  load average: 0.67, 0.19, 0.11
	Linux embed-certs-503330 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [48c2502451c2994d52482f59656eca9eebd66afe06e84cd8b32a78069d8defdc] <==
	I1009 20:34:49.831770       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:34:49.833031       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:36:48.830500       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:36:48.830858       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:36:49.832857       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:36:49.832916       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:36:49.833065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:36:49.833130       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:36:49.834050       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:36:49.835221       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:37:49.834593       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:37:49.834683       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:37:49.835874       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:37:49.836030       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:37:49.836104       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:37:49.837360       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [6c6d9ae1a9bc92ea0e6565398d986dd1b9595a7177ae73a9100d513ffdf29d76] <==
	W1009 20:21:38.670273       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.676832       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.683520       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.724639       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.768849       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.807314       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.813834       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.903275       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.919264       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.962072       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:38.992942       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.011923       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.020622       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.049288       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:39.203365       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.487057       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.847644       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:42.873443       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.005351       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.095099       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.178352       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.280463       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.306273       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.333778       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:21:43.408696       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a4c55d4cc5526f7dfb596dddbdc06f6e90237d65de2ea96fec924fa3e9162690] <==
	E1009 20:32:55.703951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:32:56.386288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:32:58.393166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="233.857µs"
	I1009 20:33:11.384315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.004µs"
	E1009 20:33:25.709472       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:33:26.401603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:33:55.715603       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:33:56.410348       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:34:25.721691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:26.419775       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:34:55.728896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:56.427882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:35:25.735684       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:26.440625       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:35:55.742167       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:56.448424       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:25.748635       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:26.457411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:55.758688       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:56.465145       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:37:20.947794       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-503330"
	E1009 20:37:25.765752       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:37:26.484151       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:37:55.772033       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:37:56.492311       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f0fe16f40d36b582b55e7bbff503d8c66a3307b7161d3b0ca30f7a308e415dd7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:21:57.263796       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:21:57.273819       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.97"]
	E1009 20:21:57.274014       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:21:57.355199       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:21:57.355239       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:21:57.355269       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:21:57.358020       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:21:57.358274       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:21:57.358286       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:21:57.360401       1 config.go:199] "Starting service config controller"
	I1009 20:21:57.360435       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:21:57.360463       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:21:57.360470       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:21:57.366722       1 config.go:328] "Starting node config controller"
	I1009 20:21:57.366737       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:21:57.462183       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:21:57.462250       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:21:57.468485       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e84e79116fa9d13602f9c375cb77344c344ba2463d740b03cedc1855bc6a3f53] <==
	W1009 20:21:48.848919       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:48.849027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.685846       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:21:49.685913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.700434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:21:49.700499       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.713865       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:49.714205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.737208       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:21:49.737238       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1009 20:21:49.793303       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:21:49.793632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.803322       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 20:21:49.803415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.832192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:21:49.832312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.950353       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:49.950483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:49.999430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:50.000883       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:50.043680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:21:50.043923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:21:50.102040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:21:50.102119       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1009 20:21:52.437260       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:37:02 embed-certs-503330 kubelet[2900]: E1009 20:37:02.365604    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:37:11 embed-certs-503330 kubelet[2900]: E1009 20:37:11.608237    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506231607802739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:11 embed-certs-503330 kubelet[2900]: E1009 20:37:11.608291    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506231607802739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:13 embed-certs-503330 kubelet[2900]: E1009 20:37:13.366596    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:37:21 embed-certs-503330 kubelet[2900]: E1009 20:37:21.609550    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506241609180307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:21 embed-certs-503330 kubelet[2900]: E1009 20:37:21.609871    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506241609180307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:27 embed-certs-503330 kubelet[2900]: E1009 20:37:27.366296    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:37:31 embed-certs-503330 kubelet[2900]: E1009 20:37:31.612106    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506251611576140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:31 embed-certs-503330 kubelet[2900]: E1009 20:37:31.612533    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506251611576140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:40 embed-certs-503330 kubelet[2900]: E1009 20:37:40.365251    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:37:41 embed-certs-503330 kubelet[2900]: E1009 20:37:41.614902    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506261614185203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:41 embed-certs-503330 kubelet[2900]: E1009 20:37:41.615130    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506261614185203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.389066    2900 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.389129    2900 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.389313    2900 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkrq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-79m5x_kube-system(c28befcf-7206-4b43-a6ef-6fa017fac7a5): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.391348    2900 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-79m5x" podUID="c28befcf-7206-4b43-a6ef-6fa017fac7a5"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.394105    2900 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.616818    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506271616600567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:51 embed-certs-503330 kubelet[2900]: E1009 20:37:51.616859    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506271616600567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:01 embed-certs-503330 kubelet[2900]: E1009 20:38:01.619092    2900 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506281618662840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:01 embed-certs-503330 kubelet[2900]: E1009 20:38:01.619158    2900 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506281618662840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [a4b1466595b03e68c22f0000845b5dcf03c840e3054a6d487e5f1c62e96da495] <==
	I1009 20:21:58.406578       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:21:58.418393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:21:58.418573       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:21:58.443273       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:21:58.449775       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"528b0580-21de-4f83-ac54-e262fc998faf", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae became leader
	I1009 20:21:58.450059       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae!
	I1009 20:21:58.550420       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-503330_981d94a8-6d60-477c-a2cd-0638367cb7ae!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-503330 -n embed-certs-503330
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-503330 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-79m5x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x: exit status 1 (88.611085ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-79m5x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-503330 describe pod metrics-server-6867b74b74-79m5x: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (413.70s)
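The kubelet log earlier in this dump shows why the selected pod never becomes ready: metrics-server is stuck in ErrImagePull against the intentionally unreachable registry fake.domain. A hedged manual check of that state is sketched below; it assumes the metrics-server deployment still exists in kube-system for the embed-certs-503330 profile (the describe above shows the pod itself was already gone).

    # Hedged sketch: inspect the image reference the addon points at, assuming
    # the deployment is still present in kube-system for this profile.
    kubectl --context embed-certs-503330 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
    # Surface the pull failures the kubelet reported above.
    kubectl --context embed-certs-503330 -n kube-system \
      get events --field-selector reason=Failed --sort-by=.lastTimestamp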

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:39:30.162151289 +0000 UTC m=+6781.151924208
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-733270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.6µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-733270 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
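The check that failed here is a bounded wait for pods carrying the k8s-app=kubernetes-dashboard label, followed by a describe of the dashboard-metrics-scraper deployment. A hedged manual equivalent, assuming the kube context default-k8s-diff-port-733270 from this run is still available:

    # Hedged sketch of the same check by hand (not the test harness itself).
    kubectl --context default-k8s-diff-port-733270 -n kubernetes-dashboard \
      get pods -l k8s-app=kubernetes-dashboard
    kubectl --context default-k8s-diff-port-733270 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s
    kubectl --context default-k8s-diff-port-733270 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper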
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-733270 logs -n 25
E1009 20:39:31.107581   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-733270 logs -n 25: (1.336728058s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | status kubelet --all --full                          |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cat kubelet --no-pager                               |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo journalctl                       | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | -xeu kubelet --all --full                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | status docker --all --full                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cat docker --no-pager                                |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo docker                           | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | status cri-docker --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cat cri-docker --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo                                  | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | status containerd --all --full                       |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cat containerd --no-pager                            |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo cat                              | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo containerd                       | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | config dump                                          |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | status crio --all --full                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo systemctl                        | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | cat crio --no-pager                                  |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo find                             | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p auto-665212 sudo crio                             | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p auto-665212                                       | auto-665212           | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC | 09 Oct 24 20:39 UTC |
	| start   | -p custom-flannel-665212                             | custom-flannel-665212 | jenkins | v1.34.0 | 09 Oct 24 20:39 UTC |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=kvm2                                        |                       |         |         |                     |                     |
	|         | --container-runtime=crio                             |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:39:16
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:39:16.469668   74576 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:39:16.469781   74576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:39:16.469791   74576 out.go:358] Setting ErrFile to fd 2...
	I1009 20:39:16.469796   74576 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:39:16.469994   74576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:39:16.470600   74576 out.go:352] Setting JSON to false
	I1009 20:39:16.471694   74576 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8497,"bootTime":1728497859,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:39:16.471784   74576 start.go:139] virtualization: kvm guest
	I1009 20:39:16.474020   74576 out.go:177] * [custom-flannel-665212] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:39:16.475183   74576 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:39:16.475185   74576 notify.go:220] Checking for updates...
	I1009 20:39:16.476396   74576 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:39:16.477660   74576 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:39:16.478925   74576 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:39:16.480044   74576 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:39:16.481172   74576 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:39:16.483173   74576 config.go:182] Loaded profile config "calico-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:39:16.483334   74576 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:39:16.483455   74576 config.go:182] Loaded profile config "kindnet-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:39:16.483561   74576 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:39:16.522264   74576 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:39:16.523574   74576 start.go:297] selected driver: kvm2
	I1009 20:39:16.523593   74576 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:39:16.523607   74576 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:39:16.524361   74576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:39:16.524446   74576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:39:16.545049   74576 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:39:16.545095   74576 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 20:39:16.545369   74576 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:39:16.545398   74576 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1009 20:39:16.545410   74576 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1009 20:39:16.545485   74576 start.go:340] cluster config:
	{Name:custom-flannel-665212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-665212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: So
cketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:39:16.545625   74576 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:39:16.547512   74576 out.go:177] * Starting "custom-flannel-665212" primary control-plane node in "custom-flannel-665212" cluster
	I1009 20:39:16.548994   74576 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:39:16.549048   74576 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:39:16.549061   74576 cache.go:56] Caching tarball of preloaded images
	I1009 20:39:16.549190   74576 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:39:16.549206   74576 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:39:16.549369   74576 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/custom-flannel-665212/config.json ...
	I1009 20:39:16.549408   74576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/custom-flannel-665212/config.json: {Name:mk905fe2b3ac41f4630113c210bebf90e293e81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:16.549596   74576 start.go:360] acquireMachinesLock for custom-flannel-665212: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:39:16.549644   74576 start.go:364] duration metric: took 24.094µs to acquireMachinesLock for "custom-flannel-665212"
	I1009 20:39:16.549665   74576 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-665212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-665212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:39:16.549757   74576 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:39:16.233944   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:16.234092   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:16.235756   72973 main.go:141] libmachine: (calico-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:3d", ip: ""} in network mk-calico-665212: {Iface:virbr3 ExpiryTime:2024-10-09 21:39:03 +0000 UTC Type:0 Mac:52:54:00:fb:46:3d Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:calico-665212 Clientid:01:52:54:00:fb:46:3d}
	I1009 20:39:16.235794   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined IP address 192.168.61.246 and MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:16.235874   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHPort
	I1009 20:39:16.235955   72973 main.go:141] libmachine: (calico-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:3d", ip: ""} in network mk-calico-665212: {Iface:virbr3 ExpiryTime:2024-10-09 21:39:03 +0000 UTC Type:0 Mac:52:54:00:fb:46:3d Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:calico-665212 Clientid:01:52:54:00:fb:46:3d}
	I1009 20:39:16.236001   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined IP address 192.168.61.246 and MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:16.236049   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHPort
	I1009 20:39:16.236130   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHKeyPath
	I1009 20:39:16.236410   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHKeyPath
	I1009 20:39:16.236498   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHUsername
	I1009 20:39:16.236573   72973 main.go:141] libmachine: (calico-665212) Calling .GetSSHUsername
	I1009 20:39:16.236654   72973 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/calico-665212/id_rsa Username:docker}
	I1009 20:39:16.236748   72973 sshutil.go:53] new ssh client: &{IP:192.168.61.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/calico-665212/id_rsa Username:docker}
	I1009 20:39:16.333286   72973 ssh_runner.go:195] Run: systemctl --version
	I1009 20:39:16.339654   72973 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:39:16.504570   72973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:39:16.510466   72973 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:39:16.510528   72973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:39:16.526322   72973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:39:16.526346   72973 start.go:495] detecting cgroup driver to use...
	I1009 20:39:16.526398   72973 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:39:16.546424   72973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:39:16.562175   72973 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:39:16.562230   72973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:39:16.580664   72973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:39:16.599043   72973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:39:16.757012   72973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:39:16.935696   72973 docker.go:233] disabling docker service ...
	I1009 20:39:16.935760   72973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:39:16.952229   72973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:39:16.967079   72973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:39:17.101297   72973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:39:17.220885   72973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:39:17.236762   72973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:39:17.256157   72973 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:39:17.256219   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.266941   72973 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:39:17.267006   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.278337   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.289822   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.301090   72973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:39:17.312495   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.323852   72973 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.342020   72973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:39:17.352978   72973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:39:17.363522   72973 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:39:17.363584   72973 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:39:17.377504   72973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:39:17.387945   72973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:39:17.514870   72973 ssh_runner.go:195] Run: sudo systemctl restart crio
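[Editor's note, not part of the captured log] The ssh_runner lines above rewrite the cri-o drop-in to pin the pause image and switch the cgroup manager before restarting the runtime. A consolidated sketch of those edits, assuming the same drop-in path and values shown in this run:

    # Hedged consolidation of the cri-o tweaks applied above, assuming
    # /etc/crio/crio.conf.d/02-crio.conf as used in this run.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio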
	I1009 20:39:17.630267   72973 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:39:17.630345   72973 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:39:17.636112   72973 start.go:563] Will wait 60s for crictl version
	I1009 20:39:17.636184   72973 ssh_runner.go:195] Run: which crictl
	I1009 20:39:17.640188   72973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:39:17.692106   72973 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:39:17.692194   72973 ssh_runner.go:195] Run: crio --version
	I1009 20:39:17.728643   72973 ssh_runner.go:195] Run: crio --version
	I1009 20:39:17.831038   72973 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:39:17.897038   72973 main.go:141] libmachine: (calico-665212) Calling .GetIP
	I1009 20:39:17.900071   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:17.900463   72973 main.go:141] libmachine: (calico-665212) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:46:3d", ip: ""} in network mk-calico-665212: {Iface:virbr3 ExpiryTime:2024-10-09 21:39:03 +0000 UTC Type:0 Mac:52:54:00:fb:46:3d Iaid: IPaddr:192.168.61.246 Prefix:24 Hostname:calico-665212 Clientid:01:52:54:00:fb:46:3d}
	I1009 20:39:17.900501   72973 main.go:141] libmachine: (calico-665212) DBG | domain calico-665212 has defined IP address 192.168.61.246 and MAC address 52:54:00:fb:46:3d in network mk-calico-665212
	I1009 20:39:17.900643   72973 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:39:17.905245   72973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:39:17.919509   72973 kubeadm.go:883] updating cluster {Name:calico-665212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:calico-665212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:39:17.919637   72973 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:39:17.919728   72973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:39:17.953357   72973 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:39:17.953434   72973 ssh_runner.go:195] Run: which lz4
	I1009 20:39:17.957915   72973 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:39:17.962448   72973 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:39:17.962479   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:39:19.461975   72973 crio.go:462] duration metric: took 1.504091808s to copy over tarball
	I1009 20:39:19.462054   72973 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:39:16.718068   72075 node_ready.go:53] node "kindnet-665212" has status "Ready":"False"
	I1009 20:39:19.217919   72075 node_ready.go:53] node "kindnet-665212" has status "Ready":"False"
	I1009 20:39:21.218213   72075 node_ready.go:53] node "kindnet-665212" has status "Ready":"False"
	I1009 20:39:16.551543   74576 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1009 20:39:16.551728   74576 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:39:16.551783   74576 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:39:16.568292   74576 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I1009 20:39:16.568817   74576 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:39:16.569423   74576 main.go:141] libmachine: Using API Version  1
	I1009 20:39:16.569483   74576 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:39:16.569830   74576 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:39:16.570071   74576 main.go:141] libmachine: (custom-flannel-665212) Calling .GetMachineName
	I1009 20:39:16.570238   74576 main.go:141] libmachine: (custom-flannel-665212) Calling .DriverName
	I1009 20:39:16.570422   74576 start.go:159] libmachine.API.Create for "custom-flannel-665212" (driver="kvm2")
	I1009 20:39:16.570464   74576 client.go:168] LocalClient.Create starting
	I1009 20:39:16.570500   74576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:39:16.570545   74576 main.go:141] libmachine: Decoding PEM data...
	I1009 20:39:16.570564   74576 main.go:141] libmachine: Parsing certificate...
	I1009 20:39:16.570625   74576 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:39:16.570660   74576 main.go:141] libmachine: Decoding PEM data...
	I1009 20:39:16.570675   74576 main.go:141] libmachine: Parsing certificate...
	I1009 20:39:16.570701   74576 main.go:141] libmachine: Running pre-create checks...
	I1009 20:39:16.570714   74576 main.go:141] libmachine: (custom-flannel-665212) Calling .PreCreateCheck
	I1009 20:39:16.571084   74576 main.go:141] libmachine: (custom-flannel-665212) Calling .GetConfigRaw
	I1009 20:39:16.571586   74576 main.go:141] libmachine: Creating machine...
	I1009 20:39:16.571605   74576 main.go:141] libmachine: (custom-flannel-665212) Calling .Create
	I1009 20:39:16.571741   74576 main.go:141] libmachine: (custom-flannel-665212) Creating KVM machine...
	I1009 20:39:16.573057   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | found existing default KVM network
	I1009 20:39:16.574483   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:16.574320   74599 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002117f0}
	I1009 20:39:16.574510   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | created network xml: 
	I1009 20:39:16.574521   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | <network>
	I1009 20:39:16.574529   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   <name>mk-custom-flannel-665212</name>
	I1009 20:39:16.574541   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   <dns enable='no'/>
	I1009 20:39:16.574548   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   
	I1009 20:39:16.574561   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1009 20:39:16.574571   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |     <dhcp>
	I1009 20:39:16.574600   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1009 20:39:16.574618   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |     </dhcp>
	I1009 20:39:16.574630   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   </ip>
	I1009 20:39:16.574642   74576 main.go:141] libmachine: (custom-flannel-665212) DBG |   
	I1009 20:39:16.574669   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | </network>
	I1009 20:39:16.574696   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | 
	I1009 20:39:16.580110   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | trying to create private KVM network mk-custom-flannel-665212 192.168.39.0/24...
	I1009 20:39:16.653639   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | private KVM network mk-custom-flannel-665212 192.168.39.0/24 created
	I1009 20:39:16.653680   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:16.653499   74599 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:39:16.653694   74576 main.go:141] libmachine: (custom-flannel-665212) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212 ...
	I1009 20:39:16.653714   74576 main.go:141] libmachine: (custom-flannel-665212) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:39:16.653747   74576 main.go:141] libmachine: (custom-flannel-665212) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:39:16.901727   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:16.901603   74599 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212/id_rsa...
	I1009 20:39:17.282693   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:17.282589   74599 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212/custom-flannel-665212.rawdisk...
	I1009 20:39:17.282721   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Writing magic tar header
	I1009 20:39:17.282758   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Writing SSH key tar header
	I1009 20:39:17.282782   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:17.282699   74599 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212 ...
	I1009 20:39:17.282833   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212
	I1009 20:39:17.282862   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:39:17.282884   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212 (perms=drwx------)
	I1009 20:39:17.282895   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:39:17.282913   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:39:17.282925   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:39:17.282935   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:39:17.282945   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Checking permissions on dir: /home
	I1009 20:39:17.282963   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | Skipping /home - not owner
	I1009 20:39:17.282979   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:39:17.282997   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:39:17.283014   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:39:17.283028   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:39:17.283039   74576 main.go:141] libmachine: (custom-flannel-665212) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1009 20:39:17.283164   74576 main.go:141] libmachine: (custom-flannel-665212) Creating domain...
	I1009 20:39:17.284196   74576 main.go:141] libmachine: (custom-flannel-665212) define libvirt domain using xml: 
	I1009 20:39:17.284219   74576 main.go:141] libmachine: (custom-flannel-665212) <domain type='kvm'>
	I1009 20:39:17.284229   74576 main.go:141] libmachine: (custom-flannel-665212)   <name>custom-flannel-665212</name>
	I1009 20:39:17.284236   74576 main.go:141] libmachine: (custom-flannel-665212)   <memory unit='MiB'>3072</memory>
	I1009 20:39:17.284244   74576 main.go:141] libmachine: (custom-flannel-665212)   <vcpu>2</vcpu>
	I1009 20:39:17.284250   74576 main.go:141] libmachine: (custom-flannel-665212)   <features>
	I1009 20:39:17.284261   74576 main.go:141] libmachine: (custom-flannel-665212)     <acpi/>
	I1009 20:39:17.284268   74576 main.go:141] libmachine: (custom-flannel-665212)     <apic/>
	I1009 20:39:17.284279   74576 main.go:141] libmachine: (custom-flannel-665212)     <pae/>
	I1009 20:39:17.284300   74576 main.go:141] libmachine: (custom-flannel-665212)     
	I1009 20:39:17.284308   74576 main.go:141] libmachine: (custom-flannel-665212)   </features>
	I1009 20:39:17.284314   74576 main.go:141] libmachine: (custom-flannel-665212)   <cpu mode='host-passthrough'>
	I1009 20:39:17.284320   74576 main.go:141] libmachine: (custom-flannel-665212)   
	I1009 20:39:17.284328   74576 main.go:141] libmachine: (custom-flannel-665212)   </cpu>
	I1009 20:39:17.284337   74576 main.go:141] libmachine: (custom-flannel-665212)   <os>
	I1009 20:39:17.284346   74576 main.go:141] libmachine: (custom-flannel-665212)     <type>hvm</type>
	I1009 20:39:17.284355   74576 main.go:141] libmachine: (custom-flannel-665212)     <boot dev='cdrom'/>
	I1009 20:39:17.284364   74576 main.go:141] libmachine: (custom-flannel-665212)     <boot dev='hd'/>
	I1009 20:39:17.284373   74576 main.go:141] libmachine: (custom-flannel-665212)     <bootmenu enable='no'/>
	I1009 20:39:17.284381   74576 main.go:141] libmachine: (custom-flannel-665212)   </os>
	I1009 20:39:17.284390   74576 main.go:141] libmachine: (custom-flannel-665212)   <devices>
	I1009 20:39:17.284434   74576 main.go:141] libmachine: (custom-flannel-665212)     <disk type='file' device='cdrom'>
	I1009 20:39:17.284460   74576 main.go:141] libmachine: (custom-flannel-665212)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212/boot2docker.iso'/>
	I1009 20:39:17.284483   74576 main.go:141] libmachine: (custom-flannel-665212)       <target dev='hdc' bus='scsi'/>
	I1009 20:39:17.284504   74576 main.go:141] libmachine: (custom-flannel-665212)       <readonly/>
	I1009 20:39:17.284515   74576 main.go:141] libmachine: (custom-flannel-665212)     </disk>
	I1009 20:39:17.284528   74576 main.go:141] libmachine: (custom-flannel-665212)     <disk type='file' device='disk'>
	I1009 20:39:17.284541   74576 main.go:141] libmachine: (custom-flannel-665212)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:39:17.284556   74576 main.go:141] libmachine: (custom-flannel-665212)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/custom-flannel-665212/custom-flannel-665212.rawdisk'/>
	I1009 20:39:17.284576   74576 main.go:141] libmachine: (custom-flannel-665212)       <target dev='hda' bus='virtio'/>
	I1009 20:39:17.284583   74576 main.go:141] libmachine: (custom-flannel-665212)     </disk>
	I1009 20:39:17.284597   74576 main.go:141] libmachine: (custom-flannel-665212)     <interface type='network'>
	I1009 20:39:17.284625   74576 main.go:141] libmachine: (custom-flannel-665212)       <source network='mk-custom-flannel-665212'/>
	I1009 20:39:17.284641   74576 main.go:141] libmachine: (custom-flannel-665212)       <model type='virtio'/>
	I1009 20:39:17.284648   74576 main.go:141] libmachine: (custom-flannel-665212)     </interface>
	I1009 20:39:17.284657   74576 main.go:141] libmachine: (custom-flannel-665212)     <interface type='network'>
	I1009 20:39:17.284668   74576 main.go:141] libmachine: (custom-flannel-665212)       <source network='default'/>
	I1009 20:39:17.284680   74576 main.go:141] libmachine: (custom-flannel-665212)       <model type='virtio'/>
	I1009 20:39:17.284690   74576 main.go:141] libmachine: (custom-flannel-665212)     </interface>
	I1009 20:39:17.284700   74576 main.go:141] libmachine: (custom-flannel-665212)     <serial type='pty'>
	I1009 20:39:17.284710   74576 main.go:141] libmachine: (custom-flannel-665212)       <target port='0'/>
	I1009 20:39:17.284720   74576 main.go:141] libmachine: (custom-flannel-665212)     </serial>
	I1009 20:39:17.284738   74576 main.go:141] libmachine: (custom-flannel-665212)     <console type='pty'>
	I1009 20:39:17.284750   74576 main.go:141] libmachine: (custom-flannel-665212)       <target type='serial' port='0'/>
	I1009 20:39:17.284760   74576 main.go:141] libmachine: (custom-flannel-665212)     </console>
	I1009 20:39:17.284770   74576 main.go:141] libmachine: (custom-flannel-665212)     <rng model='virtio'>
	I1009 20:39:17.284781   74576 main.go:141] libmachine: (custom-flannel-665212)       <backend model='random'>/dev/random</backend>
	I1009 20:39:17.284793   74576 main.go:141] libmachine: (custom-flannel-665212)     </rng>
	I1009 20:39:17.284813   74576 main.go:141] libmachine: (custom-flannel-665212)     
	I1009 20:39:17.284823   74576 main.go:141] libmachine: (custom-flannel-665212)     
	I1009 20:39:17.284830   74576 main.go:141] libmachine: (custom-flannel-665212)   </devices>
	I1009 20:39:17.284842   74576 main.go:141] libmachine: (custom-flannel-665212) </domain>
	I1009 20:39:17.284849   74576 main.go:141] libmachine: (custom-flannel-665212) 
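[Editor's note, not part of the captured log] The DBG lines above spell out the libvirt network and domain XML that the kvm2 driver defines for this VM. A hedged sketch for inspecting (or recreating) the same objects with virsh, assuming the names used in this run:

    # Hedged sketch: inspect what the kvm2 driver defined, assuming the names
    # custom-flannel-665212 and mk-custom-flannel-665212 from this run.
    virsh net-dumpxml mk-custom-flannel-665212
    virsh dumpxml custom-flannel-665212
    # Re-defining by hand would mirror what the driver does:
    #   virsh net-define network.xml && virsh net-start mk-custom-flannel-665212
    #   virsh define domain.xml && virsh start custom-flannel-665212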
	I1009 20:39:17.289004   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:a7:ca:54 in network default
	I1009 20:39:17.289575   74576 main.go:141] libmachine: (custom-flannel-665212) Ensuring networks are active...
	I1009 20:39:17.289596   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:17.290364   74576 main.go:141] libmachine: (custom-flannel-665212) Ensuring network default is active
	I1009 20:39:17.290719   74576 main.go:141] libmachine: (custom-flannel-665212) Ensuring network mk-custom-flannel-665212 is active
	I1009 20:39:17.291416   74576 main.go:141] libmachine: (custom-flannel-665212) Getting domain xml...
	I1009 20:39:17.292262   74576 main.go:141] libmachine: (custom-flannel-665212) Creating domain...
	I1009 20:39:18.919543   74576 main.go:141] libmachine: (custom-flannel-665212) Waiting to get IP...
	I1009 20:39:18.920531   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:18.921992   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:18.922016   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:18.921971   74599 retry.go:31] will retry after 288.811319ms: waiting for machine to come up
	I1009 20:39:19.212793   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:19.213317   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:19.213361   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:19.213264   74599 retry.go:31] will retry after 338.092899ms: waiting for machine to come up
	I1009 20:39:19.552703   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:19.553277   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:19.553304   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:19.553208   74599 retry.go:31] will retry after 364.884628ms: waiting for machine to come up
	I1009 20:39:19.919954   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:19.920455   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:19.920477   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:19.920407   74599 retry.go:31] will retry after 556.698252ms: waiting for machine to come up
	I1009 20:39:20.479360   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:20.479898   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:20.479920   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:20.479869   74599 retry.go:31] will retry after 463.472371ms: waiting for machine to come up
	I1009 20:39:20.945059   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:20.945725   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:20.945751   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:20.945632   74599 retry.go:31] will retry after 713.184503ms: waiting for machine to come up
	I1009 20:39:21.753781   72973 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.291698056s)
	I1009 20:39:21.753815   72973 crio.go:469] duration metric: took 2.291807316s to extract the tarball
	I1009 20:39:21.753831   72973 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:39:21.792408   72973 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:39:21.837423   72973 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:39:21.837443   72973 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:39:21.837450   72973 kubeadm.go:934] updating node { 192.168.61.246 8443 v1.31.1 crio true true} ...
	I1009 20:39:21.837538   72973 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-665212 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-665212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1009 20:39:21.837604   72973 ssh_runner.go:195] Run: crio config
	I1009 20:39:21.889318   72973 cni.go:84] Creating CNI manager for "calico"
	I1009 20:39:21.889342   72973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:39:21.889364   72973 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.246 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-665212 NodeName:calico-665212 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.246"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.246 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:39:21.889477   72973 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.246
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-665212"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.246
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.246"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:39:21.889529   72973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:39:21.899752   72973 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:39:21.899820   72973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:39:21.909515   72973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1009 20:39:21.929053   72973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:39:21.945404   72973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I1009 20:39:21.961356   72973 ssh_runner.go:195] Run: grep 192.168.61.246	control-plane.minikube.internal$ /etc/hosts
	I1009 20:39:21.964965   72973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.246	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:39:21.977200   72973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:39:22.104311   72973 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:39:22.122665   72973 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212 for IP: 192.168.61.246
	I1009 20:39:22.122687   72973 certs.go:194] generating shared ca certs ...
	I1009 20:39:22.122701   72973 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.122838   72973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:39:22.122898   72973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:39:22.122907   72973 certs.go:256] generating profile certs ...
	I1009 20:39:22.122954   72973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.key
	I1009 20:39:22.122966   72973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.crt with IP's: []
	I1009 20:39:22.219806   72973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.crt ...
	I1009 20:39:22.219832   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.crt: {Name:mk342269c5f6800bae452de8f08291c5ed20bc56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.219992   72973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.key ...
	I1009 20:39:22.220002   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/client.key: {Name:mk6241472668304a9efdd765fd24ef01f6fb8a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.220074   72973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key.2b243a29
	I1009 20:39:22.220087   72973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt.2b243a29 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.246]
	I1009 20:39:22.341945   72973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt.2b243a29 ...
	I1009 20:39:22.341971   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt.2b243a29: {Name:mk9ec2634af33af03543e0d4f24f261094a6df86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.342133   72973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key.2b243a29 ...
	I1009 20:39:22.342147   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key.2b243a29: {Name:mkb999f971ab0d31e539c43bf62993367235c8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.342224   72973 certs.go:381] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt.2b243a29 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt
	I1009 20:39:22.342306   72973 certs.go:385] copying /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key.2b243a29 -> /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key
	I1009 20:39:22.342361   72973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.key
	I1009 20:39:22.342375   72973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.crt with IP's: []
	I1009 20:39:22.433046   72973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.crt ...
	I1009 20:39:22.433073   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.crt: {Name:mk2bd9e829b4bd5a632ca1b8b4b23cee4eeb05bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.433232   72973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.key ...
	I1009 20:39:22.433243   72973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.key: {Name:mkc5a7bc29ebe8140437795e006c7d2fcffc05a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:39:22.433408   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:39:22.433442   72973 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:39:22.433459   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:39:22.433481   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:39:22.433503   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:39:22.433524   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:39:22.433559   72973 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:39:22.434081   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:39:22.460674   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:39:22.486431   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:39:22.511320   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:39:22.538592   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 20:39:22.562655   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:39:22.589862   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:39:22.623032   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/calico-665212/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:39:22.652514   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:39:22.678482   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:39:22.705544   72973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:39:22.735401   72973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:39:22.753462   72973 ssh_runner.go:195] Run: openssl version
	I1009 20:39:22.759392   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:39:22.770841   72973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:39:22.777018   72973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:39:22.777082   72973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:39:22.785000   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:39:22.799547   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:39:22.816332   72973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:39:22.823888   72973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:39:22.823966   72973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:39:22.836353   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:39:22.856883   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:39:22.872999   72973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:39:22.878870   72973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:39:22.878934   72973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:39:22.885055   72973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:39:22.896141   72973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:39:22.900472   72973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 20:39:22.900529   72973 kubeadm.go:392] StartCluster: {Name:calico-665212 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:calico-665212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.246 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:39:22.900618   72973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:39:22.900667   72973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:39:22.940472   72973 cri.go:89] found id: ""
	I1009 20:39:22.940548   72973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:39:22.951072   72973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:39:22.961041   72973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:39:22.971043   72973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:39:22.971057   72973 kubeadm.go:157] found existing configuration files:
	
	I1009 20:39:22.971117   72973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:39:22.980538   72973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:39:22.980617   72973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:39:22.990284   72973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:39:22.999490   72973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:39:22.999551   72973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:39:23.012264   72973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:39:23.024274   72973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:39:23.024349   72973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:39:23.034391   72973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:39:23.043472   72973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:39:23.043540   72973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:39:23.052972   72973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:39:23.110196   72973 kubeadm.go:310] W1009 20:39:23.095439     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:39:23.110951   72973 kubeadm.go:310] W1009 20:39:23.096369     833 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:39:23.236429   72973 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:39:23.343586   72075 node_ready.go:53] node "kindnet-665212" has status "Ready":"False"
	I1009 20:39:25.717352   72075 node_ready.go:49] node "kindnet-665212" has status "Ready":"True"
	I1009 20:39:25.717378   72075 node_ready.go:38] duration metric: took 15.50367468s for node "kindnet-665212" to be "Ready" ...
	I1009 20:39:25.717390   72075 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:39:25.726175   72075 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-j25bq" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:21.660106   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:21.660572   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:21.660610   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:21.660527   74599 retry.go:31] will retry after 934.907609ms: waiting for machine to come up
	I1009 20:39:22.597362   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:22.597867   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:22.597891   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:22.597808   74599 retry.go:31] will retry after 935.044897ms: waiting for machine to come up
	I1009 20:39:23.534354   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:23.534919   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:23.534946   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:23.534844   74599 retry.go:31] will retry after 1.409008654s: waiting for machine to come up
	I1009 20:39:24.945324   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | domain custom-flannel-665212 has defined MAC address 52:54:00:d6:53:d1 in network mk-custom-flannel-665212
	I1009 20:39:24.945795   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | unable to find current IP address of domain custom-flannel-665212 in network mk-custom-flannel-665212
	I1009 20:39:24.945838   74576 main.go:141] libmachine: (custom-flannel-665212) DBG | I1009 20:39:24.945751   74599 retry.go:31] will retry after 1.869256604s: waiting for machine to come up
	I1009 20:39:27.232438   72075 pod_ready.go:93] pod "coredns-7c65d6cfc9-j25bq" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.232469   72075 pod_ready.go:82] duration metric: took 1.506266643s for pod "coredns-7c65d6cfc9-j25bq" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.232483   72075 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.237168   72075 pod_ready.go:93] pod "etcd-kindnet-665212" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.237191   72075 pod_ready.go:82] duration metric: took 4.700494ms for pod "etcd-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.237208   72075 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.244008   72075 pod_ready.go:93] pod "kube-apiserver-kindnet-665212" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.244026   72075 pod_ready.go:82] duration metric: took 6.811564ms for pod "kube-apiserver-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.244036   72075 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.249730   72075 pod_ready.go:93] pod "kube-controller-manager-kindnet-665212" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.249750   72075 pod_ready.go:82] duration metric: took 5.70833ms for pod "kube-controller-manager-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.249760   72075 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-8pc6x" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.317967   72075 pod_ready.go:93] pod "kube-proxy-8pc6x" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.318000   72075 pod_ready.go:82] duration metric: took 68.229838ms for pod "kube-proxy-8pc6x" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.318012   72075 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.719004   72075 pod_ready.go:93] pod "kube-scheduler-kindnet-665212" in "kube-system" namespace has status "Ready":"True"
	I1009 20:39:27.719035   72075 pod_ready.go:82] duration metric: took 401.014165ms for pod "kube-scheduler-kindnet-665212" in "kube-system" namespace to be "Ready" ...
	I1009 20:39:27.719049   72075 pod_ready.go:39] duration metric: took 2.001634695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:39:27.719089   72075 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:39:27.719147   72075 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:39:27.737363   72075 api_server.go:72] duration metric: took 18.493786412s to wait for apiserver process to appear ...
	I1009 20:39:27.737392   72075 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:39:27.737415   72075 api_server.go:253] Checking apiserver healthz at https://192.168.50.85:8443/healthz ...
	I1009 20:39:27.742408   72075 api_server.go:279] https://192.168.50.85:8443/healthz returned 200:
	ok
	I1009 20:39:27.743788   72075 api_server.go:141] control plane version: v1.31.1
	I1009 20:39:27.743814   72075 api_server.go:131] duration metric: took 6.414283ms to wait for apiserver health ...
	I1009 20:39:27.743824   72075 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:39:27.922756   72075 system_pods.go:59] 8 kube-system pods found
	I1009 20:39:27.922875   72075 system_pods.go:61] "coredns-7c65d6cfc9-j25bq" [b4782f08-5052-413a-92eb-3d18a2b854df] Running
	I1009 20:39:27.922896   72075 system_pods.go:61] "etcd-kindnet-665212" [20ffcff4-2211-4a29-8a73-72e36b29fc9e] Running
	I1009 20:39:27.922919   72075 system_pods.go:61] "kindnet-lx2fn" [970081a8-f5ab-479a-a0cb-bc54f2230e97] Running
	I1009 20:39:27.922954   72075 system_pods.go:61] "kube-apiserver-kindnet-665212" [8f201913-e10d-4736-9074-0603af8cea96] Running
	I1009 20:39:27.922969   72075 system_pods.go:61] "kube-controller-manager-kindnet-665212" [293bb5db-2bff-4a44-94ae-79f5b9529e8b] Running
	I1009 20:39:27.922984   72075 system_pods.go:61] "kube-proxy-8pc6x" [f201864d-ea88-4018-a574-6c3500007375] Running
	I1009 20:39:27.922998   72075 system_pods.go:61] "kube-scheduler-kindnet-665212" [f191fbe1-785b-4879-9f2b-46c0570ac1e0] Running
	I1009 20:39:27.923031   72075 system_pods.go:61] "storage-provisioner" [8b442bfb-aab7-4fee-a2e7-106ef9747c71] Running
	I1009 20:39:27.923078   72075 system_pods.go:74] duration metric: took 179.221818ms to wait for pod list to return data ...
	I1009 20:39:27.923192   72075 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:39:28.119575   72075 default_sa.go:45] found service account: "default"
	I1009 20:39:28.119609   72075 default_sa.go:55] duration metric: took 196.387757ms for default service account to be created ...
	I1009 20:39:28.119621   72075 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:39:28.322957   72075 system_pods.go:86] 8 kube-system pods found
	I1009 20:39:28.322991   72075 system_pods.go:89] "coredns-7c65d6cfc9-j25bq" [b4782f08-5052-413a-92eb-3d18a2b854df] Running
	I1009 20:39:28.322999   72075 system_pods.go:89] "etcd-kindnet-665212" [20ffcff4-2211-4a29-8a73-72e36b29fc9e] Running
	I1009 20:39:28.323005   72075 system_pods.go:89] "kindnet-lx2fn" [970081a8-f5ab-479a-a0cb-bc54f2230e97] Running
	I1009 20:39:28.323011   72075 system_pods.go:89] "kube-apiserver-kindnet-665212" [8f201913-e10d-4736-9074-0603af8cea96] Running
	I1009 20:39:28.323016   72075 system_pods.go:89] "kube-controller-manager-kindnet-665212" [293bb5db-2bff-4a44-94ae-79f5b9529e8b] Running
	I1009 20:39:28.323022   72075 system_pods.go:89] "kube-proxy-8pc6x" [f201864d-ea88-4018-a574-6c3500007375] Running
	I1009 20:39:28.323028   72075 system_pods.go:89] "kube-scheduler-kindnet-665212" [f191fbe1-785b-4879-9f2b-46c0570ac1e0] Running
	I1009 20:39:28.323033   72075 system_pods.go:89] "storage-provisioner" [8b442bfb-aab7-4fee-a2e7-106ef9747c71] Running
	I1009 20:39:28.323041   72075 system_pods.go:126] duration metric: took 203.413587ms to wait for k8s-apps to be running ...
	I1009 20:39:28.323052   72075 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:39:28.323123   72075 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:39:28.343313   72075 system_svc.go:56] duration metric: took 20.251985ms WaitForService to wait for kubelet
	I1009 20:39:28.343348   72075 kubeadm.go:582] duration metric: took 19.099774228s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:39:28.343369   72075 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:39:28.519255   72075 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:39:28.519293   72075 node_conditions.go:123] node cpu capacity is 2
	I1009 20:39:28.519309   72075 node_conditions.go:105] duration metric: took 175.934044ms to run NodePressure ...
	I1009 20:39:28.519324   72075 start.go:241] waiting for startup goroutines ...
	I1009 20:39:28.519338   72075 start.go:246] waiting for cluster config update ...
	I1009 20:39:28.519355   72075 start.go:255] writing updated cluster config ...
	I1009 20:39:28.519706   72075 ssh_runner.go:195] Run: rm -f paused
	I1009 20:39:28.586006   72075 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:39:28.587954   72075 out.go:177] * Done! kubectl is now configured to use "kindnet-665212" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.818995616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506370818961693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2152f442-1550-4cda-9b5d-d3fadf5859ff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.819895206Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da10ed19-a876-476c-ac9a-7637657b5b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.819984484Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da10ed19-a876-476c-ac9a-7637657b5b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.820254222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da10ed19-a876-476c-ac9a-7637657b5b2d name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.860663400Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d637c0bf-232f-47d4-bd30-ee611643b907 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.860734360Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d637c0bf-232f-47d4-bd30-ee611643b907 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.862591522Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13735513-964a-4cca-ab75-02a30731245e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.863304167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506370863270822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13735513-964a-4cca-ab75-02a30731245e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.863991083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dafbffb9-3c43-49cf-93c4-4bd9d6e6d1af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.864059083Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dafbffb9-3c43-49cf-93c4-4bd9d6e6d1af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.864269536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dafbffb9-3c43-49cf-93c4-4bd9d6e6d1af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.908250574Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=941c9aad-0895-461b-8b09-9d624082c137 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.908349928Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=941c9aad-0895-461b-8b09-9d624082c137 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.909970250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5ad41600-fece-4aef-bb46-96a27cba9d38 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.910559019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506370910529480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5ad41600-fece-4aef-bb46-96a27cba9d38 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.911422174Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f87c0e0f-44f8-46ca-ac36-c40a12659153 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.911510897Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f87c0e0f-44f8-46ca-ac36-c40a12659153 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.911860428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f87c0e0f-44f8-46ca-ac36-c40a12659153 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.944382381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f9ce9285-3c74-4c0d-a273-212eb8413d22 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.944455379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f9ce9285-3c74-4c0d-a273-212eb8413d22 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.945756025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7400d3f6-6a79-4c19-a98f-91bc56d517b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.946456362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506370946432261,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7400d3f6-6a79-4c19-a98f-91bc56d517b0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.946957420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=600706e2-0975-4160-8b0a-d8fdccf59cd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.947027835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=600706e2-0975-4160-8b0a-d8fdccf59cd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:39:30 default-k8s-diff-port-733270 crio[705]: time="2024-10-09 20:39:30.947229972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8,PodSandboxId:40f5ac310a4c0ce32cb1eabffc7264a7cf5be7fee8e68838ffc78615f4d700f0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505341438040209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90f34170-4cef-4daa-ad01-14999b6f1110,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce,PodSandboxId:6665a3fd51c65e54653a4bf7b0b49ec7ba2c915ab6ec08ccd449e3041a8c265b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341435449588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-8x9ns,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e5e8e5-f679-486e-b1a5-69eb7b46d49e,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49,PodSandboxId:e7f4ce8dc720cf38b016a2f42947591b6c4af46d3a04f49f2a79ba5c8f656644,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505341373386799,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6644x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: f598059d-a036-45df-885c-95efd04424d9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362,PodSandboxId:18e07ae69944d7aadc833f063a484c4395f6304fd5615f749f05e56551bd951f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1728505341038628311,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6klwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb78cea4-6c44-4a04-a75b-6ed061c1ecdf,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09,PodSandboxId:f0e11d5fdb6e958ab12bce3dc62c91896aa33321b341b21ed18038574aa4c6c7,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505329988574887,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b97277fc866172dab7127888d1f0d4ab,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81,PodSandboxId:d8abb33ba7d58c299f7f974600d190688b525f97ea99d5c88740178e4bc19300,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505329930949666,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4,PodSandboxId:5c17b81e750a10d77bb5ca9e48416c39d72d2946bd409ad82ee3603b5534118c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505329937549070,Labels:map[string]string{io.kube
rnetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b97f68f3fb4328203f939607b95a02d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188,PodSandboxId:ad7d44cd50ef0aed80b550452cb67df8df25a6a6563ff57a3268ee168df3082d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505329914119040,Labels:map[string]string{
io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4bbcaffd799afafaf9cf6f5dbf86aa5c,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719,PodSandboxId:deb32eb8f9eb4dffb8bf620693aff8e787a3d2addf4f3c90a70642021e38c604,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728505042055642680,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-733270,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d127d12dff327ffca2b85d17d7317e25,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=600706e2-0975-4160-8b0a-d8fdccf59cd4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	519150750d160       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   40f5ac310a4c0       storage-provisioner
	1599ceb30116f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   6665a3fd51c65       coredns-7c65d6cfc9-8x9ns
	be8ea22a44eb0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   e7f4ce8dc720c       coredns-7c65d6cfc9-6644x
	1a250c859008a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 minutes ago      Running             kube-proxy                0                   18e07ae69944d       kube-proxy-6klwf
	a67302292e06e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   f0e11d5fdb6e9       etcd-default-k8s-diff-port-733270
	b41b34d2a3dcd       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   5c17b81e750a1       kube-scheduler-default-k8s-diff-port-733270
	5fc33213e2fd8       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   d8abb33ba7d58       kube-apiserver-default-k8s-diff-port-733270
	4cb9d0e572902       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   ad7d44cd50ef0       kube-controller-manager-default-k8s-diff-port-733270
	2419be48ef7ea       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 minutes ago      Exited              kube-apiserver            1                   deb32eb8f9eb4       kube-apiserver-default-k8s-diff-port-733270
	
	
	==> coredns [1599ceb30116fedc7de8e9c5633579345307320a232290fdea1302672f04e0ce] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [be8ea22a44eb0de35bc5687235e321826493f798958ad10921e965dbdbe86f49] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-733270
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-733270
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=default-k8s-diff-port-733270
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-733270
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:39:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:37:44 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:37:44 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:37:44 +0000   Wed, 09 Oct 2024 20:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:37:44 +0000   Wed, 09 Oct 2024 20:22:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.134
	  Hostname:    default-k8s-diff-port-733270
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c47bd50252314253803dbb053fca24c4
	  System UUID:                c47bd502-5231-4253-803d-bb053fca24c4
	  Boot ID:                    c11b6fae-9e1c-4543-9658-1fcfc30a47b1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6644x                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-8x9ns                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-733270                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-733270             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-733270    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-6klwf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-733270             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-srjrs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-733270 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-733270 event: Registered Node default-k8s-diff-port-733270 in Controller
	
	
	==> dmesg <==
	[  +0.041503] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.990112] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.489351] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[Oct 9 20:17] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.080715] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.058439] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.061166] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.219655] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.117621] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +0.310828] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.131703] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +2.201156] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +0.062913] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.531482] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.583706] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.429409] kauditd_printk_skb: 26 callbacks suppressed
	[Oct 9 20:22] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.622050] systemd-fstab-generator[2540]: Ignoring "noauto" option for root device
	[  +4.969307] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.587071] systemd-fstab-generator[2864]: Ignoring "noauto" option for root device
	[  +4.864577] systemd-fstab-generator[2973]: Ignoring "noauto" option for root device
	[  +0.103872] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.522596] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a67302292e06e08b387e71310aece90078da4e28b5445f0695105e0f880e0a09] <==
	{"level":"info","ts":"2024-10-09T20:22:10.698314Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b97e97327d189999","local-member-attributes":"{Name:default-k8s-diff-port-733270 ClientURLs:[https://192.168.72.134:2379]}","request-path":"/0/members/b97e97327d189999/attributes","cluster-id":"e05c7f9c7688aa0f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:22:10.705950Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:22:10.706211Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:22:10.713847Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:22:10.706407Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e05c7f9c7688aa0f","local-member-id":"b97e97327d189999","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.713980Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.714028Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-09T20:22:10.714525Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:22:10.718735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:32:11.109304Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":722}
	{"level":"info","ts":"2024-10-09T20:32:11.118397Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":722,"took":"8.421359ms","hash":2270665808,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2244608,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-10-09T20:32:11.118491Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2270665808,"revision":722,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T20:37:11.231947Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2024-10-09T20:37:11.238506Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":966,"took":"4.926592ms","hash":1189034051,"current-db-size-bytes":2244608,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1572864,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-09T20:37:11.238585Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1189034051,"revision":966,"compact-revision":722}
	{"level":"info","ts":"2024-10-09T20:38:08.792637Z","caller":"traceutil/trace.go:171","msg":"trace[670912325] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"246.682069ms","start":"2024-10-09T20:38:08.545904Z","end":"2024-10-09T20:38:08.792586Z","steps":["trace[670912325] 'process raft request'  (duration: 246.280713ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:38:29.358313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.686907ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11068038581627579352 > lease_revoke:<id:19999272f23aeb79>","response":"size:29"}
	{"level":"info","ts":"2024-10-09T20:38:44.246784Z","caller":"traceutil/trace.go:171","msg":"trace[813724180] transaction","detail":"{read_only:false; response_revision:1287; number_of_response:1; }","duration":"142.243821ms","start":"2024-10-09T20:38:44.104503Z","end":"2024-10-09T20:38:44.246747Z","steps":["trace[813724180] 'process raft request'  (duration: 121.247619ms)","trace[813724180] 'compare'  (duration: 20.836143ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:38:54.592397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.130454ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11068038581627579499 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.134\" mod_revision:1287 > success:<request_put:<key:\"/registry/masterleases/192.168.72.134\" value_size:67 lease:1844666544772803688 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.134\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:38:54.592663Z","caller":"traceutil/trace.go:171","msg":"trace[1119374912] transaction","detail":"{read_only:false; response_revision:1296; number_of_response:1; }","duration":"254.271041ms","start":"2024-10-09T20:38:54.338380Z","end":"2024-10-09T20:38:54.592651Z","steps":["trace[1119374912] 'process raft request'  (duration: 129.343951ms)","trace[1119374912] 'compare'  (duration: 123.977447ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T20:39:23.617330Z","caller":"traceutil/trace.go:171","msg":"trace[1803311350] transaction","detail":"{read_only:false; response_revision:1319; number_of_response:1; }","duration":"241.720393ms","start":"2024-10-09T20:39:23.375587Z","end":"2024-10-09T20:39:23.617308Z","steps":["trace[1803311350] 'process raft request'  (duration: 241.386155ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:39:23.838791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.482826ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:39:23.839002Z","caller":"traceutil/trace.go:171","msg":"trace[1129931246] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1319; }","duration":"142.73585ms","start":"2024-10-09T20:39:23.696229Z","end":"2024-10-09T20:39:23.838965Z","steps":["trace[1129931246] 'range keys from in-memory index tree'  (duration: 142.417076ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:39:24.352760Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.714937ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11068038581627579678 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.134\" mod_revision:1312 > success:<request_put:<key:\"/registry/masterleases/192.168.72.134\" value_size:67 lease:1844666544772803868 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.134\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:39:24.353351Z","caller":"traceutil/trace.go:171","msg":"trace[1685432555] transaction","detail":"{read_only:false; response_revision:1320; number_of_response:1; }","duration":"256.146428ms","start":"2024-10-09T20:39:24.097182Z","end":"2024-10-09T20:39:24.353328Z","steps":["trace[1685432555] 'process raft request'  (duration: 125.689605ms)","trace[1685432555] 'compare'  (duration: 129.559602ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:39:31 up 22 min,  0 users,  load average: 0.09, 0.13, 0.09
	Linux default-k8s-diff-port-733270 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2419be48ef7ea3b623843f778bd4ae2015c61c1884542057bc057404edff2719] <==
	W1009 20:22:02.203505       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.203612       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.209051       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.230509       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.281046       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.387889       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.395338       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.410904       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.428375       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.475410       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.511455       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.546660       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.559050       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.591432       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.597044       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.613151       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.617975       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.697969       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.704484       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.728362       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.732870       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.821904       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.833495       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.863506       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1009 20:22:02.885071       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [5fc33213e2fd8726c28b7016a600f9039bb044e4a504a746896e3c2c7b4b2f81] <==
	I1009 20:35:13.471651       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:35:13.471736       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:37:12.471000       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:37:12.471481       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:37:13.473921       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:37:13.473991       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:37:13.474089       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:37:13.474171       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:37:13.475173       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:37:13.475281       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:38:13.476349       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:38:13.476507       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:38:13.476375       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:38:13.476587       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:38:13.477915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:38:13.477950       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [4cb9d0e572902c43184ad23dfeea15829eaeb9ad3b7eef120a76aabe3d9c1188] <==
	E1009 20:34:19.514904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:19.976659       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:34:49.521904       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:49.985099       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:35:19.528556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:19.993105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:35:49.534985       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:50.001416       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:19.541519       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:20.009019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:49.548172       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:50.019109       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:37:19.554720       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:37:20.027038       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:37:44.948692       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-733270"
	E1009 20:37:49.561219       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:37:50.035391       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:38:19.567497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:38:20.043994       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:38:38.249988       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="284.907µs"
	E1009 20:38:49.577667       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:38:50.053452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:38:50.248055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="301.518µs"
	E1009 20:39:19.585943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:39:20.062540       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [1a250c859008a117f4ed6d49c55b67c26f015ba6ce16c75f09b0b32db9ec0362] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:22:21.818439       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:22:21.844092       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.134"]
	E1009 20:22:21.844172       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:22:21.944554       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:22:21.944640       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:22:21.944675       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:22:21.947496       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:22:21.948070       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:22:21.948338       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:22:21.950059       1 config.go:199] "Starting service config controller"
	I1009 20:22:21.950116       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:22:21.950156       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:22:21.950172       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:22:21.951592       1 config.go:328] "Starting node config controller"
	I1009 20:22:21.951645       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:22:22.050680       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:22:22.050772       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:22:22.052251       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b41b34d2a3dcd58221aea4af23377e0c837cfa2bcb72fe9907dc6403e43bd5e4] <==
	W1009 20:22:12.514385       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:22:12.515775       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.337764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 20:22:13.337880       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.347168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.347233       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.371153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 20:22:13.371208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.391237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 20:22:13.391291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.415901       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 20:22:13.415992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.417082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.417203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.473085       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 20:22:13.473182       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.564656       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.564952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.603480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 20:22:13.603552       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.631209       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 20:22:13.631266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 20:22:13.704619       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 20:22:13.704673       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1009 20:22:16.705095       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:38:27 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:27.256287    2871 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 09 20:38:27 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:27.256777    2871 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmprd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountP
ropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-srjrs_kube-system(9fe02f22-4b36-4d68-bdf8-51d66609567a): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 09 20:38:27 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:27.258366    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:38:35 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:35.531553    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506315531162362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:35 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:35.531656    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506315531162362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:38 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:38.231268    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:38:45 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:45.532884    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506325532506868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:45 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:45.533008    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506325532506868,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:50 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:50.231695    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:38:55 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:55.535385    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506335534572425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:38:55 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:38:55.535445    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506335534572425,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:03 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:03.231527    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:39:05 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:05.541508    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506345540794867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:05 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:05.541556    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506345540794867,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:15.232595    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:15.276623    2871 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:15.548289    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506355547098899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:15 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:15.548361    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506355547098899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:25.549902    2871 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506365549580856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:25 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:25.549931    2871 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506365549580856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:39:30 default-k8s-diff-port-733270 kubelet[2871]: E1009 20:39:30.231881    2871 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-srjrs" podUID="9fe02f22-4b36-4d68-bdf8-51d66609567a"
	
	
	==> storage-provisioner [519150750d160b0ce7eb0b618bc0d56f9ea04f295d2a70ff097cd41231f126f8] <==
	I1009 20:22:21.730760       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:22:21.748389       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:22:21.748551       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:22:21.766392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:22:21.767060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4b3da48f-dde7-4ad2-82ca-0315dd56d005", APIVersion:"v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4 became leader
	I1009 20:22:21.769429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4!
	I1009 20:22:21.870671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-733270_82667a2d-f280-4aa8-addc-ccb916c29dc4!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-srjrs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs: exit status 1 (71.250611ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-srjrs" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-733270 describe pod metrics-server-6867b74b74-srjrs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (479.46s)
E1009 20:41:04.168619   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:41:06.924795   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (347.02s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480205 -n no-preload-480205
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-09 20:37:33.061294418 +0000 UTC m=+6664.051067334
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-480205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-480205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.597µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-480205 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-480205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-480205 logs -n 25: (1.238309578s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:36 UTC | 09 Oct 24 20:36 UTC |
	| start   | -p newest-cni-203991 --memory=2200 --alsologtostderr   | newest-cni-203991            | jenkins | v1.34.0 | 09 Oct 24 20:36 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:36:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:36:59.343183   70845 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:36:59.343441   70845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:36:59.343451   70845 out.go:358] Setting ErrFile to fd 2...
	I1009 20:36:59.343455   70845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:36:59.343620   70845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:36:59.344157   70845 out.go:352] Setting JSON to false
	I1009 20:36:59.345069   70845 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8360,"bootTime":1728497859,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:36:59.345155   70845 start.go:139] virtualization: kvm guest
	I1009 20:36:59.347512   70845 out.go:177] * [newest-cni-203991] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:36:59.349067   70845 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:36:59.349076   70845 notify.go:220] Checking for updates...
	I1009 20:36:59.352032   70845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:36:59.353435   70845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:36:59.354706   70845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:36:59.355931   70845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:36:59.357237   70845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:36:59.358964   70845 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:36:59.359124   70845 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:36:59.359276   70845 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:36:59.359391   70845 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:36:59.396079   70845 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:36:59.397352   70845 start.go:297] selected driver: kvm2
	I1009 20:36:59.397367   70845 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:36:59.397382   70845 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:36:59.398320   70845 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:36:59.398425   70845 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:36:59.414097   70845 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:36:59.414150   70845 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1009 20:36:59.414217   70845 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1009 20:36:59.414542   70845 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1009 20:36:59.414587   70845 cni.go:84] Creating CNI manager for ""
	I1009 20:36:59.414646   70845 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:36:59.414656   70845 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 20:36:59.414732   70845 start.go:340] cluster config:
	{Name:newest-cni-203991 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-203991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:36:59.414872   70845 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:36:59.416842   70845 out.go:177] * Starting "newest-cni-203991" primary control-plane node in "newest-cni-203991" cluster
	I1009 20:36:59.418004   70845 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:36:59.418040   70845 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 20:36:59.418053   70845 cache.go:56] Caching tarball of preloaded images
	I1009 20:36:59.418134   70845 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:36:59.418152   70845 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 20:36:59.418229   70845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/newest-cni-203991/config.json ...
	I1009 20:36:59.418247   70845 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/newest-cni-203991/config.json: {Name:mk89297723f31e087be85f34d1b37d9de34e0550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:36:59.418369   70845 start.go:360] acquireMachinesLock for newest-cni-203991: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:36:59.418395   70845 start.go:364] duration metric: took 14.022µs to acquireMachinesLock for "newest-cni-203991"
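The acquireMachinesLock call above is a poll-until-timeout lock (Delay:500ms, Timeout:13m0s in this run). minikube uses its own locking library; the sketch below only illustrates the retry/timeout shape with a plain O_EXCL lock file:

    // Poll for an exclusive lock file, retrying every `delay` until `timeout`.
    package lock

    import (
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay) // retry after the configured delay
        }
    }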
	I1009 20:36:59.418410   70845 start.go:93] Provisioning new machine with config: &{Name:newest-cni-203991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-203991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:36:59.418508   70845 start.go:125] createHost starting for "" (driver="kvm2")
	I1009 20:36:59.420668   70845 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1009 20:36:59.420816   70845 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:36:59.420866   70845 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:36:59.436351   70845 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43045
	I1009 20:36:59.436871   70845 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:36:59.437388   70845 main.go:141] libmachine: Using API Version  1
	I1009 20:36:59.437411   70845 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:36:59.437690   70845 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:36:59.437800   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetMachineName
	I1009 20:36:59.437956   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:36:59.438091   70845 start.go:159] libmachine.API.Create for "newest-cni-203991" (driver="kvm2")
	I1009 20:36:59.438120   70845 client.go:168] LocalClient.Create starting
	I1009 20:36:59.438149   70845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem
	I1009 20:36:59.438179   70845 main.go:141] libmachine: Decoding PEM data...
	I1009 20:36:59.438198   70845 main.go:141] libmachine: Parsing certificate...
	I1009 20:36:59.438249   70845 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem
	I1009 20:36:59.438267   70845 main.go:141] libmachine: Decoding PEM data...
	I1009 20:36:59.438276   70845 main.go:141] libmachine: Parsing certificate...
	I1009 20:36:59.438291   70845 main.go:141] libmachine: Running pre-create checks...
	I1009 20:36:59.438300   70845 main.go:141] libmachine: (newest-cni-203991) Calling .PreCreateCheck
	I1009 20:36:59.438655   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetConfigRaw
	I1009 20:36:59.439029   70845 main.go:141] libmachine: Creating machine...
	I1009 20:36:59.439044   70845 main.go:141] libmachine: (newest-cni-203991) Calling .Create
	I1009 20:36:59.439195   70845 main.go:141] libmachine: (newest-cni-203991) Creating KVM machine...
	I1009 20:36:59.440561   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found existing default KVM network
	I1009 20:36:59.442013   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:36:59.441857   70869 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:2d:ac:b7} reservation:<nil>}
	I1009 20:36:59.442801   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:36:59.442740   70869 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:30:e3:eb} reservation:<nil>}
	I1009 20:36:59.443927   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:36:59.443846   70869 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000322fc0}
	I1009 20:36:59.443948   70845 main.go:141] libmachine: (newest-cni-203991) DBG | created network xml: 
	I1009 20:36:59.443958   70845 main.go:141] libmachine: (newest-cni-203991) DBG | <network>
	I1009 20:36:59.443964   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   <name>mk-newest-cni-203991</name>
	I1009 20:36:59.443970   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   <dns enable='no'/>
	I1009 20:36:59.443975   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   
	I1009 20:36:59.443985   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1009 20:36:59.443996   70845 main.go:141] libmachine: (newest-cni-203991) DBG |     <dhcp>
	I1009 20:36:59.444015   70845 main.go:141] libmachine: (newest-cni-203991) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1009 20:36:59.444025   70845 main.go:141] libmachine: (newest-cni-203991) DBG |     </dhcp>
	I1009 20:36:59.444033   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   </ip>
	I1009 20:36:59.444049   70845 main.go:141] libmachine: (newest-cni-203991) DBG |   
	I1009 20:36:59.444061   70845 main.go:141] libmachine: (newest-cni-203991) DBG | </network>
	I1009 20:36:59.444070   70845 main.go:141] libmachine: (newest-cni-203991) DBG | 
	I1009 20:36:59.449194   70845 main.go:141] libmachine: (newest-cni-203991) DBG | trying to create private KVM network mk-newest-cni-203991 192.168.61.0/24...
	I1009 20:36:59.516627   70845 main.go:141] libmachine: (newest-cni-203991) DBG | private KVM network mk-newest-cni-203991 192.168.61.0/24 created
	I1009 20:36:59.516660   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:36:59.516589   70869 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:36:59.516674   70845 main.go:141] libmachine: (newest-cni-203991) Setting up store path in /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991 ...
	I1009 20:36:59.516690   70845 main.go:141] libmachine: (newest-cni-203991) Building disk image from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 20:36:59.516740   70845 main.go:141] libmachine: (newest-cni-203991) Downloading /home/jenkins/minikube-integration/19780-9412/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1009 20:36:59.771574   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:36:59.771452   70869 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa...
	I1009 20:37:00.072334   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:00.072185   70869 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/newest-cni-203991.rawdisk...
	I1009 20:37:00.072376   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Writing magic tar header
	I1009 20:37:00.072392   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Writing SSH key tar header
	I1009 20:37:00.072405   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:00.072347   70869 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991 ...
	I1009 20:37:00.072521   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991
	I1009 20:37:00.072553   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube/machines
	I1009 20:37:00.072567   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991 (perms=drwx------)
	I1009 20:37:00.072586   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:37:00.072625   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube/machines (perms=drwxr-xr-x)
	I1009 20:37:00.072637   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19780-9412
	I1009 20:37:00.072652   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1009 20:37:00.072662   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home/jenkins
	I1009 20:37:00.072673   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412/.minikube (perms=drwxr-xr-x)
	I1009 20:37:00.072689   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins/minikube-integration/19780-9412 (perms=drwxrwxr-x)
	I1009 20:37:00.072705   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1009 20:37:00.072714   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Checking permissions on dir: /home
	I1009 20:37:00.072723   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Skipping /home - not owner
	I1009 20:37:00.072734   70845 main.go:141] libmachine: (newest-cni-203991) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
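The "Fixing permissions" block above walks from the machine directory up through its parents, setting the executable (search) bit on directories the current user owns and skipping the rest (hence "Skipping /home - not owner"). A Linux-only sketch of that walk, not minikube's actual helper:

    // Ensure each owned ancestor directory of `dir` is searchable; skip
    // directories owned by another user, as the log does for /home.
    package perms

    import (
        "os"
        "path/filepath"
        "syscall"
    )

    func fixParents(dir string) error {
        for p := dir; ; p = filepath.Dir(p) {
            info, err := os.Stat(p)
            if err != nil {
                return err
            }
            st, ok := info.Sys().(*syscall.Stat_t)
            if ok && int(st.Uid) != os.Getuid() {
                // not the owner - leave permissions alone
            } else if err := os.Chmod(p, info.Mode()|0o100); err != nil {
                return err
            }
            if p == filepath.Dir(p) { // reached the filesystem root
                return nil
            }
        }
    }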
	I1009 20:37:00.072744   70845 main.go:141] libmachine: (newest-cni-203991) Creating domain...
	I1009 20:37:00.073757   70845 main.go:141] libmachine: (newest-cni-203991) define libvirt domain using xml: 
	I1009 20:37:00.073776   70845 main.go:141] libmachine: (newest-cni-203991) <domain type='kvm'>
	I1009 20:37:00.073785   70845 main.go:141] libmachine: (newest-cni-203991)   <name>newest-cni-203991</name>
	I1009 20:37:00.073791   70845 main.go:141] libmachine: (newest-cni-203991)   <memory unit='MiB'>2200</memory>
	I1009 20:37:00.073799   70845 main.go:141] libmachine: (newest-cni-203991)   <vcpu>2</vcpu>
	I1009 20:37:00.073806   70845 main.go:141] libmachine: (newest-cni-203991)   <features>
	I1009 20:37:00.073815   70845 main.go:141] libmachine: (newest-cni-203991)     <acpi/>
	I1009 20:37:00.073867   70845 main.go:141] libmachine: (newest-cni-203991)     <apic/>
	I1009 20:37:00.073880   70845 main.go:141] libmachine: (newest-cni-203991)     <pae/>
	I1009 20:37:00.073891   70845 main.go:141] libmachine: (newest-cni-203991)     
	I1009 20:37:00.073900   70845 main.go:141] libmachine: (newest-cni-203991)   </features>
	I1009 20:37:00.073917   70845 main.go:141] libmachine: (newest-cni-203991)   <cpu mode='host-passthrough'>
	I1009 20:37:00.073926   70845 main.go:141] libmachine: (newest-cni-203991)   
	I1009 20:37:00.073933   70845 main.go:141] libmachine: (newest-cni-203991)   </cpu>
	I1009 20:37:00.073944   70845 main.go:141] libmachine: (newest-cni-203991)   <os>
	I1009 20:37:00.073952   70845 main.go:141] libmachine: (newest-cni-203991)     <type>hvm</type>
	I1009 20:37:00.073984   70845 main.go:141] libmachine: (newest-cni-203991)     <boot dev='cdrom'/>
	I1009 20:37:00.074011   70845 main.go:141] libmachine: (newest-cni-203991)     <boot dev='hd'/>
	I1009 20:37:00.074022   70845 main.go:141] libmachine: (newest-cni-203991)     <bootmenu enable='no'/>
	I1009 20:37:00.074033   70845 main.go:141] libmachine: (newest-cni-203991)   </os>
	I1009 20:37:00.074041   70845 main.go:141] libmachine: (newest-cni-203991)   <devices>
	I1009 20:37:00.074052   70845 main.go:141] libmachine: (newest-cni-203991)     <disk type='file' device='cdrom'>
	I1009 20:37:00.074068   70845 main.go:141] libmachine: (newest-cni-203991)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/boot2docker.iso'/>
	I1009 20:37:00.074078   70845 main.go:141] libmachine: (newest-cni-203991)       <target dev='hdc' bus='scsi'/>
	I1009 20:37:00.074085   70845 main.go:141] libmachine: (newest-cni-203991)       <readonly/>
	I1009 20:37:00.074093   70845 main.go:141] libmachine: (newest-cni-203991)     </disk>
	I1009 20:37:00.074102   70845 main.go:141] libmachine: (newest-cni-203991)     <disk type='file' device='disk'>
	I1009 20:37:00.074125   70845 main.go:141] libmachine: (newest-cni-203991)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1009 20:37:00.074150   70845 main.go:141] libmachine: (newest-cni-203991)       <source file='/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/newest-cni-203991.rawdisk'/>
	I1009 20:37:00.074169   70845 main.go:141] libmachine: (newest-cni-203991)       <target dev='hda' bus='virtio'/>
	I1009 20:37:00.074186   70845 main.go:141] libmachine: (newest-cni-203991)     </disk>
	I1009 20:37:00.074205   70845 main.go:141] libmachine: (newest-cni-203991)     <interface type='network'>
	I1009 20:37:00.074219   70845 main.go:141] libmachine: (newest-cni-203991)       <source network='mk-newest-cni-203991'/>
	I1009 20:37:00.074234   70845 main.go:141] libmachine: (newest-cni-203991)       <model type='virtio'/>
	I1009 20:37:00.074246   70845 main.go:141] libmachine: (newest-cni-203991)     </interface>
	I1009 20:37:00.074257   70845 main.go:141] libmachine: (newest-cni-203991)     <interface type='network'>
	I1009 20:37:00.074270   70845 main.go:141] libmachine: (newest-cni-203991)       <source network='default'/>
	I1009 20:37:00.074290   70845 main.go:141] libmachine: (newest-cni-203991)       <model type='virtio'/>
	I1009 20:37:00.074308   70845 main.go:141] libmachine: (newest-cni-203991)     </interface>
	I1009 20:37:00.074324   70845 main.go:141] libmachine: (newest-cni-203991)     <serial type='pty'>
	I1009 20:37:00.074336   70845 main.go:141] libmachine: (newest-cni-203991)       <target port='0'/>
	I1009 20:37:00.074347   70845 main.go:141] libmachine: (newest-cni-203991)     </serial>
	I1009 20:37:00.074358   70845 main.go:141] libmachine: (newest-cni-203991)     <console type='pty'>
	I1009 20:37:00.074368   70845 main.go:141] libmachine: (newest-cni-203991)       <target type='serial' port='0'/>
	I1009 20:37:00.074379   70845 main.go:141] libmachine: (newest-cni-203991)     </console>
	I1009 20:37:00.074388   70845 main.go:141] libmachine: (newest-cni-203991)     <rng model='virtio'>
	I1009 20:37:00.074400   70845 main.go:141] libmachine: (newest-cni-203991)       <backend model='random'>/dev/random</backend>
	I1009 20:37:00.074409   70845 main.go:141] libmachine: (newest-cni-203991)     </rng>
	I1009 20:37:00.074415   70845 main.go:141] libmachine: (newest-cni-203991)     
	I1009 20:37:00.074419   70845 main.go:141] libmachine: (newest-cni-203991)     
	I1009 20:37:00.074424   70845 main.go:141] libmachine: (newest-cni-203991)   </devices>
	I1009 20:37:00.074431   70845 main.go:141] libmachine: (newest-cni-203991) </domain>
	I1009 20:37:00.074437   70845 main.go:141] libmachine: (newest-cni-203991) 
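minikube defines and boots this domain through the libvirt API; the same steps can be reproduced by hand with the virsh CLI. A sketch under that assumption (the .xml file names are hypothetical; the connection URI, network name, and domain name are the ones from this log):

    // Define the private network and the domain, then boot the VM, using virsh.
    package main

    import "os/exec"

    func run(args ...string) error {
        return exec.Command(args[0], args[1:]...).Run()
    }

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        must(run("virsh", "--connect", "qemu:///system", "net-define", "mk-newest-cni-203991.xml"))
        must(run("virsh", "--connect", "qemu:///system", "net-start", "mk-newest-cni-203991"))
        must(run("virsh", "--connect", "qemu:///system", "define", "newest-cni-203991.xml"))
        must(run("virsh", "--connect", "qemu:///system", "start", "newest-cni-203991"))
    }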
	I1009 20:37:00.078791   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:da:67:a1 in network default
	I1009 20:37:00.079484   70845 main.go:141] libmachine: (newest-cni-203991) Ensuring networks are active...
	I1009 20:37:00.079516   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:00.080150   70845 main.go:141] libmachine: (newest-cni-203991) Ensuring network default is active
	I1009 20:37:00.080529   70845 main.go:141] libmachine: (newest-cni-203991) Ensuring network mk-newest-cni-203991 is active
	I1009 20:37:00.081160   70845 main.go:141] libmachine: (newest-cni-203991) Getting domain xml...
	I1009 20:37:00.082012   70845 main.go:141] libmachine: (newest-cni-203991) Creating domain...
	I1009 20:37:01.319438   70845 main.go:141] libmachine: (newest-cni-203991) Waiting to get IP...
	I1009 20:37:01.320314   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:01.320657   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:01.320681   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:01.320623   70869 retry.go:31] will retry after 305.113278ms: waiting for machine to come up
	I1009 20:37:01.626935   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:01.627521   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:01.627547   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:01.627473   70869 retry.go:31] will retry after 276.834608ms: waiting for machine to come up
	I1009 20:37:01.906001   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:01.906403   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:01.906450   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:01.906367   70869 retry.go:31] will retry after 304.205661ms: waiting for machine to come up
	I1009 20:37:02.211695   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:02.212152   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:02.212180   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:02.212120   70869 retry.go:31] will retry after 578.826701ms: waiting for machine to come up
	I1009 20:37:02.792905   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:02.793369   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:02.793409   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:02.793342   70869 retry.go:31] will retry after 735.674018ms: waiting for machine to come up
	I1009 20:37:03.530591   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:03.530993   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:03.531017   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:03.530955   70869 retry.go:31] will retry after 605.418085ms: waiting for machine to come up
	I1009 20:37:04.137834   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:04.138283   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:04.138310   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:04.138230   70869 retry.go:31] will retry after 947.467278ms: waiting for machine to come up
	I1009 20:37:05.086849   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:05.087304   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:05.087329   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:05.087258   70869 retry.go:31] will retry after 1.31436011s: waiting for machine to come up
	I1009 20:37:06.403551   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:06.403990   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:06.404016   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:06.403940   70869 retry.go:31] will retry after 1.740581846s: waiting for machine to come up
	I1009 20:37:08.146927   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:08.147426   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:08.147447   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:08.147384   70869 retry.go:31] will retry after 2.174976964s: waiting for machine to come up
	I1009 20:37:10.324242   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:10.324677   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:10.324704   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:10.324642   70869 retry.go:31] will retry after 2.357510037s: waiting for machine to come up
	I1009 20:37:12.684213   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:12.684685   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:12.684715   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:12.684631   70869 retry.go:31] will retry after 3.579364626s: waiting for machine to come up
	I1009 20:37:16.265323   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:16.265788   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:16.265814   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:16.265758   70869 retry.go:31] will retry after 3.314552187s: waiting for machine to come up
	I1009 20:37:19.581560   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:19.581938   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find current IP address of domain newest-cni-203991 in network mk-newest-cni-203991
	I1009 20:37:19.581966   70845 main.go:141] libmachine: (newest-cni-203991) DBG | I1009 20:37:19.581893   70869 retry.go:31] will retry after 3.647935967s: waiting for machine to come up
	I1009 20:37:23.231039   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.231443   70845 main.go:141] libmachine: (newest-cni-203991) Found IP for machine: 192.168.61.67
	I1009 20:37:23.231472   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has current primary IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.231481   70845 main.go:141] libmachine: (newest-cni-203991) Reserving static IP address...
	I1009 20:37:23.231750   70845 main.go:141] libmachine: (newest-cni-203991) DBG | unable to find host DHCP lease matching {name: "newest-cni-203991", mac: "52:54:00:ac:e1:30", ip: "192.168.61.67"} in network mk-newest-cni-203991
	I1009 20:37:23.306176   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Getting to WaitForSSH function...
	I1009 20:37:23.306212   70845 main.go:141] libmachine: (newest-cni-203991) Reserved static IP address: 192.168.61.67
	I1009 20:37:23.306226   70845 main.go:141] libmachine: (newest-cni-203991) Waiting for SSH to be available...
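The "Waiting to get IP" sequence above is a retry loop with a growing delay: poll the DHCP leases for the domain's MAC until an address appears. A rough sketch of that pattern (the lookup callback is a hypothetical stand-in for the lease query; the jittered delays in the log come from minikube's own retry helper):

    // Poll `lookup` with a growing delay until it reports an IP or the
    // caller's context expires.
    package waitip

    import (
        "context"
        "time"
    )

    func waitForIP(ctx context.Context, lookup func() (ip string, ok bool)) (string, error) {
        delay := 300 * time.Millisecond
        for {
            if ip, ok := lookup(); ok {
                return ip, nil
            }
            select {
            case <-ctx.Done():
                return "", ctx.Err()
            case <-time.After(delay):
            }
            if delay < 4*time.Second {
                delay += delay / 2 // back off roughly as the log shows
            }
        }
    }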
	I1009 20:37:23.308925   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.309470   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.309500   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.309560   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Using SSH client type: external
	I1009 20:37:23.309606   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa (-rw-------)
	I1009 20:37:23.309638   70845 main.go:141] libmachine: (newest-cni-203991) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.67 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:37:23.309656   70845 main.go:141] libmachine: (newest-cni-203991) DBG | About to run SSH command:
	I1009 20:37:23.309668   70845 main.go:141] libmachine: (newest-cni-203991) DBG | exit 0
	I1009 20:37:23.435526   70845 main.go:141] libmachine: (newest-cni-203991) DBG | SSH cmd err, output: <nil>: 
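The reachability probe above is simply an external `ssh ... "exit 0"` with the options printed in the log. A minimal wrapper around that command; the IP, user, key path, and flags below are copied from this run:

    // Return true once the guest accepts key-based SSH ("exit 0" succeeds).
    package main

    import "os/exec"

    func sshReachable(ip, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@"+ip,
            "exit", "0")
        return cmd.Run() == nil
    }

    func main() {
        _ = sshReachable("192.168.61.67",
            "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa")
    }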
	I1009 20:37:23.435804   70845 main.go:141] libmachine: (newest-cni-203991) KVM machine creation complete!
	I1009 20:37:23.436128   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetConfigRaw
	I1009 20:37:23.436787   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:23.436981   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:23.437161   70845 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1009 20:37:23.437177   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetState
	I1009 20:37:23.438332   70845 main.go:141] libmachine: Detecting operating system of created instance...
	I1009 20:37:23.438346   70845 main.go:141] libmachine: Waiting for SSH to be available...
	I1009 20:37:23.438351   70845 main.go:141] libmachine: Getting to WaitForSSH function...
	I1009 20:37:23.438356   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:23.441031   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.441428   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.441455   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.441624   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:23.441788   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.441951   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.442073   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:23.442253   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:23.442485   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:23.442500   70845 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1009 20:37:23.546437   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:37:23.546468   70845 main.go:141] libmachine: Detecting the provisioner...
	I1009 20:37:23.546479   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:23.549230   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.549577   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.549608   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.549804   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:23.550002   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.550125   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.550252   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:23.550405   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:23.550572   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:23.550581   70845 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1009 20:37:23.656074   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1009 20:37:23.656176   70845 main.go:141] libmachine: found compatible host: buildroot
	I1009 20:37:23.656191   70845 main.go:141] libmachine: Provisioning with buildroot...
	I1009 20:37:23.656205   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetMachineName
	I1009 20:37:23.656434   70845 buildroot.go:166] provisioning hostname "newest-cni-203991"
	I1009 20:37:23.656492   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetMachineName
	I1009 20:37:23.656700   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:23.659491   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.659880   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.659908   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.660054   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:23.660216   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.660359   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.660477   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:23.660626   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:23.660793   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:23.660804   70845 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-203991 && echo "newest-cni-203991" | sudo tee /etc/hostname
	I1009 20:37:23.782595   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-203991
	
	I1009 20:37:23.782629   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:23.785716   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.786105   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.786134   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.786348   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:23.786527   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.786696   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:23.786863   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:23.787077   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:23.787286   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:23.787304   70845 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-203991' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-203991/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-203991' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:37:23.900371   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
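The two provisioning commands above (set the hostname, then idempotently map 127.0.1.1 in /etc/hosts) can be wrapped as one helper. A sketch assuming a hypothetical runSSH callback that executes a shell command on the guest; the shell itself is the script from the log:

    // Set the guest hostname and keep /etc/hosts consistent with it.
    package provision

    import "fmt"

    func setHostname(runSSH func(cmd string) error, name string) error {
        if err := runSSH(fmt.Sprintf(
            "sudo hostname %[1]s && echo %[1]q | sudo tee /etc/hostname", name)); err != nil {
            return err
        }
        // Same /etc/hosts logic as the script in the log: rewrite an existing
        // 127.0.1.1 entry if present, otherwise append one.
        return runSSH(fmt.Sprintf(
            `if ! grep -xq '.*\s%[1]s' /etc/hosts; then `+
                `if grep -xq '127.0.1.1\s.*' /etc/hosts; then `+
                `sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts; `+
                `else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi`, name))
    }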
	I1009 20:37:23.900421   70845 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:37:23.900443   70845 buildroot.go:174] setting up certificates
	I1009 20:37:23.900459   70845 provision.go:84] configureAuth start
	I1009 20:37:23.900474   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetMachineName
	I1009 20:37:23.900707   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetIP
	I1009 20:37:23.903644   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.904037   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.904066   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.904185   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:23.906464   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.906792   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:23.906816   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:23.906985   70845 provision.go:143] copyHostCerts
	I1009 20:37:23.907048   70845 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:37:23.907085   70845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:37:23.907165   70845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:37:23.907270   70845 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:37:23.907279   70845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:37:23.907309   70845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:37:23.907373   70845 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:37:23.907380   70845 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:37:23.907401   70845 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:37:23.907459   70845 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.newest-cni-203991 san=[127.0.0.1 192.168.61.67 localhost minikube newest-cni-203991]
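The server-cert step above issues a certificate whose SANs are the list in the log (127.0.0.1, 192.168.61.67, localhost, minikube, newest-cni-203991). A self-signed stand-in using crypto/x509; minikube actually signs with its ca.pem/ca-key.pem, which is omitted here to keep the sketch short:

    // Emit a PEM certificate with the SAN set shown in the log above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-203991"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-203991"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.67")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }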
	I1009 20:37:24.066119   70845 provision.go:177] copyRemoteCerts
	I1009 20:37:24.066176   70845 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:37:24.066199   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.068770   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.069070   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.069099   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.069234   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.069424   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.069564   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.069692   70845 sshutil.go:53] new ssh client: &{IP:192.168.61.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa Username:docker}
	I1009 20:37:24.152945   70845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:37:24.179756   70845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:37:24.204566   70845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:37:24.231861   70845 provision.go:87] duration metric: took 331.387618ms to configureAuth
	I1009 20:37:24.231900   70845 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:37:24.232070   70845 config.go:182] Loaded profile config "newest-cni-203991": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:37:24.232170   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.234513   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.234860   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.234887   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.235087   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.235304   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.235524   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.235682   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.235846   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:24.235993   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:24.236007   70845 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:37:24.489080   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:37:24.489116   70845 main.go:141] libmachine: Checking connection to Docker...
	I1009 20:37:24.489127   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetURL
	I1009 20:37:24.490464   70845 main.go:141] libmachine: (newest-cni-203991) DBG | Using libvirt version 6000000
	I1009 20:37:24.492848   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.493219   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.493254   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.493460   70845 main.go:141] libmachine: Docker is up and running!
	I1009 20:37:24.493477   70845 main.go:141] libmachine: Reticulating splines...
	I1009 20:37:24.493484   70845 client.go:171] duration metric: took 25.055353499s to LocalClient.Create
	I1009 20:37:24.493507   70845 start.go:167] duration metric: took 25.055417239s to libmachine.API.Create "newest-cni-203991"
	I1009 20:37:24.493516   70845 start.go:293] postStartSetup for "newest-cni-203991" (driver="kvm2")
	I1009 20:37:24.493528   70845 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:37:24.493551   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:24.493774   70845 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:37:24.493796   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.496147   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.496520   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.496549   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.496681   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.496859   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.497012   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.497143   70845 sshutil.go:53] new ssh client: &{IP:192.168.61.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa Username:docker}
	I1009 20:37:24.583638   70845 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:37:24.587896   70845 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:37:24.587923   70845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:37:24.587982   70845 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:37:24.588058   70845 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:37:24.588145   70845 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:37:24.597663   70845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
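The local-assets step above mirrors everything under .minikube/files/<path> to /<path> on the guest (here 166072.pem ends up in /etc/ssl/certs). A sketch of that walk-and-copy, with copyToGuest as a hypothetical stand-in for the scp call in the log:

    // Copy each file under filesDir to the corresponding absolute path on the guest.
    package assets

    import (
        "io/fs"
        "path/filepath"
    )

    func syncFiles(filesDir string, copyToGuest func(src, dst string) error) error {
        return filepath.WalkDir(filesDir, func(p string, d fs.DirEntry, err error) error {
            if err != nil || d.IsDir() {
                return err
            }
            rel, err := filepath.Rel(filesDir, p)
            if err != nil {
                return err
            }
            // e.g. files/etc/ssl/certs/166072.pem -> /etc/ssl/certs/166072.pem
            return copyToGuest(p, "/"+rel)
        })
    }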
	I1009 20:37:24.621131   70845 start.go:296] duration metric: took 127.601609ms for postStartSetup
	I1009 20:37:24.621191   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetConfigRaw
	I1009 20:37:24.621821   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetIP
	I1009 20:37:24.624457   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.624800   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.624827   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.625059   70845 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/newest-cni-203991/config.json ...
	I1009 20:37:24.625226   70845 start.go:128] duration metric: took 25.206703082s to createHost
	I1009 20:37:24.625246   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.627584   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.627873   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.627917   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.628025   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.628148   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.628288   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.628401   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.628571   70845 main.go:141] libmachine: Using SSH client type: native
	I1009 20:37:24.628763   70845 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.67 22 <nil> <nil>}
	I1009 20:37:24.628780   70845 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:37:24.735694   70845 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728506244.713245314
	
	I1009 20:37:24.735725   70845 fix.go:216] guest clock: 1728506244.713245314
	I1009 20:37:24.735738   70845 fix.go:229] Guest: 2024-10-09 20:37:24.713245314 +0000 UTC Remote: 2024-10-09 20:37:24.625236434 +0000 UTC m=+25.318352564 (delta=88.00888ms)
	I1009 20:37:24.735779   70845 fix.go:200] guest clock delta is within tolerance: 88.00888ms
	I1009 20:37:24.735786   70845 start.go:83] releasing machines lock for "newest-cni-203991", held for 25.31738214s
	I1009 20:37:24.735815   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:24.736064   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetIP
	I1009 20:37:24.738811   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.739243   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.739276   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.739403   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:24.739921   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:24.740107   70845 main.go:141] libmachine: (newest-cni-203991) Calling .DriverName
	I1009 20:37:24.740160   70845 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:37:24.740203   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.740320   70845 ssh_runner.go:195] Run: cat /version.json
	I1009 20:37:24.740339   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHHostname
	I1009 20:37:24.742822   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.743082   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.743169   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.743215   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.743320   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.743423   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:24.743458   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:24.743551   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.743579   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHPort
	I1009 20:37:24.743696   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.743755   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHKeyPath
	I1009 20:37:24.743821   70845 sshutil.go:53] new ssh client: &{IP:192.168.61.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa Username:docker}
	I1009 20:37:24.743869   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetSSHUsername
	I1009 20:37:24.743980   70845 sshutil.go:53] new ssh client: &{IP:192.168.61.67 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/newest-cni-203991/id_rsa Username:docker}
	I1009 20:37:24.846634   70845 ssh_runner.go:195] Run: systemctl --version
	I1009 20:37:24.852768   70845 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:37:25.011985   70845 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:37:25.017968   70845 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:37:25.018038   70845 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:37:25.034283   70845 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:37:25.034306   70845 start.go:495] detecting cgroup driver to use...
	I1009 20:37:25.034354   70845 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:37:25.049375   70845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:37:25.064833   70845 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:37:25.064887   70845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:37:25.079802   70845 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:37:25.093695   70845 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:37:25.213370   70845 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:37:25.371993   70845 docker.go:233] disabling docker service ...
	I1009 20:37:25.372065   70845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:37:25.388115   70845 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:37:25.400727   70845 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:37:25.532193   70845 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:37:25.665552   70845 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:37:25.681547   70845 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:37:25.701419   70845 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:37:25.701479   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.713686   70845 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:37:25.713750   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.724637   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.735431   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.745989   70845 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:37:25.758128   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.768449   70845 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:37:25.785865   70845 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
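	Applied in sequence, the sed edits above would leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a reconstruction from the commands, not a capture of the actual file on this VM:
	
	  pause_image = "registry.k8s.io/pause:3.10"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	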
	I1009 20:37:25.797866   70845 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:37:25.807250   70845 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:37:25.807294   70845 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:37:25.820779   70845 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
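	The sysctl failure above only means the br_netfilter module had not been loaded yet; once the modprobe succeeds, the bridge netfilter keys appear under /proc/sys. A manual re-check on the guest might look like this (assumed commands, mirroring what the tooling runs):
	
	  lsmod | grep br_netfilter                        # module should now be listed
	  sudo sysctl net.bridge.bridge-nf-call-iptables   # key exists once the module is loaded (typically 1)
	  sudo sysctl net.ipv4.ip_forward                  # expected to report 1 after the echo above
	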
	I1009 20:37:25.831712   70845 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:37:25.962005   70845 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:37:26.052584   70845 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:37:26.052664   70845 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:37:26.057386   70845 start.go:563] Will wait 60s for crictl version
	I1009 20:37:26.057458   70845 ssh_runner.go:195] Run: which crictl
	I1009 20:37:26.060985   70845 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:37:26.100874   70845 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:37:26.100975   70845 ssh_runner.go:195] Run: crio --version
	I1009 20:37:26.129370   70845 ssh_runner.go:195] Run: crio --version
	I1009 20:37:26.163159   70845 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:37:26.164500   70845 main.go:141] libmachine: (newest-cni-203991) Calling .GetIP
	I1009 20:37:26.166792   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:26.167202   70845 main.go:141] libmachine: (newest-cni-203991) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ac:e1:30", ip: ""} in network mk-newest-cni-203991: {Iface:virbr3 ExpiryTime:2024-10-09 21:37:14 +0000 UTC Type:0 Mac:52:54:00:ac:e1:30 Iaid: IPaddr:192.168.61.67 Prefix:24 Hostname:newest-cni-203991 Clientid:01:52:54:00:ac:e1:30}
	I1009 20:37:26.167239   70845 main.go:141] libmachine: (newest-cni-203991) DBG | domain newest-cni-203991 has defined IP address 192.168.61.67 and MAC address 52:54:00:ac:e1:30 in network mk-newest-cni-203991
	I1009 20:37:26.167487   70845 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:37:26.171571   70845 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:37:26.185619   70845 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1009 20:37:26.186865   70845 kubeadm.go:883] updating cluster {Name:newest-cni-203991 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-203991 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.67 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:37:26.186989   70845 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:37:26.187052   70845 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:37:26.218420   70845 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:37:26.218481   70845 ssh_runner.go:195] Run: which lz4
	I1009 20:37:26.222439   70845 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:37:26.226584   70845 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:37:26.226610   70845 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:37:27.569137   70845 crio.go:462] duration metric: took 1.346722323s to copy over tarball
	I1009 20:37:27.569219   70845 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.669927080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506253669902883,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9a035f6-77a7-4dd3-8b46-729023f0389e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.670586201Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68c77d43-c42a-4c05-bed2-5b2778914412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.670687384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68c77d43-c42a-4c05-bed2-5b2778914412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.671030079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68c77d43-c42a-4c05-bed2-5b2778914412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.709061043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=572f0cfa-49f8-4972-abf6-3dbb40da84f8 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.709134932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=572f0cfa-49f8-4972-abf6-3dbb40da84f8 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.710076763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5a39510-2766-4efb-9431-3880909367b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.710517500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506253710455893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5a39510-2766-4efb-9431-3880909367b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.711033416Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66dcd388-435f-4421-8164-2b84b75a5b15 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.711254275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66dcd388-435f-4421-8164-2b84b75a5b15 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.712025548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=66dcd388-435f-4421-8164-2b84b75a5b15 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.752519910Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2aba7d48-f803-4838-9197-e2552499d0ff name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.752616946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2aba7d48-f803-4838-9197-e2552499d0ff name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.753698291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94a9ce1f-060d-48d5-94ec-f88c0c15893c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.754062732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506253754042304,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94a9ce1f-060d-48d5-94ec-f88c0c15893c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.754781173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40246023-324d-4b9d-8da2-29b68a32fb4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.754849799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40246023-324d-4b9d-8da2-29b68a32fb4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.755050573Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40246023-324d-4b9d-8da2-29b68a32fb4a name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.793044697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7fc26cd5-3b24-4808-89bf-b84bac2dab66 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.793137016Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7fc26cd5-3b24-4808-89bf-b84bac2dab66 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.794459596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9409db2-e0da-4341-abd9-51eafa001c30 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.794865302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506253794843828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9409db2-e0da-4341-abd9-51eafa001c30 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.795417186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d003351b-9f44-4dfb-84ba-28cea1fc8c35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.795489611Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d003351b-9f44-4dfb-84ba-28cea1fc8c35 name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:37:33 no-preload-480205 crio[701]: time="2024-10-09 20:37:33.796684035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728505130324851517,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e8775,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54de70fedf7d5dd6b6eee66a7c7202866a6040e4e72e115c0857ceb44a00274f,PodSandboxId:02ea45abe18098c3f4a94231fded474c9df9c3822dcb8b405e18885e7848a6a5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728505110248694331,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2d8238b-b1d8-4770-90d6-27087a4a95b5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d,PodSandboxId:17ecff4f59d2daa859fd9ce97431fe783b9619aef8084cfd6b03a255674e2e82,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728505107151340068,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dddm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 284ba3c4-0972-40f4-97d9-6ed9ce09feac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915,PodSandboxId:0c1182fc5dd45f75643b6184a2e1398f0c3767ffc8065525238f8326f5d0d59c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728505099488615575,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vbpbk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acf61f4e-0d31-4712-9d
3e-7baa113b31d9,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c,PodSandboxId:2e806445f254dcc50e3e0271b8a2f6e481f895a26fe0047b98c4e951c74a4c0d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728505099459572166,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d88d60b3-7360-4111-b680-e9e2a38e87
75,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da,PodSandboxId:88c3e467b83bfc51f846e59c0cc61f7784c4806e4ae4323f9a757d5670bc5b82,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728505094827040214,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6630093b442a7ab96b2fbc070f5e39b1,},Annotations:map[string]string{io.kuber
netes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2,PodSandboxId:2a1fe5cfb209d88a409e63634ae3cecf3c024c1095e0720a1f1ec10f0627132c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728505094855362794,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0f67b234fae8b4cd32029941a4f99b6,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4,PodSandboxId:2eac684995236ac5d70c8dff22d7065540b443855653142ad0fee9555e195369,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728505094786042459,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2adad3fbe07057bd70072eae79db81b4,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2
713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783,PodSandboxId:c8613ecf9b51b18f633cf29d1a4a3869e2c8cb5c901f99e653adfb63740fa9c3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728505094737379329,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-480205,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aed7d412dd3223d26543c0d27afdd758,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d003351b-9f44-4dfb-84ba-28cea1fc8c35 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a672e8a67e92b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       2                   2e806445f254d       storage-provisioner
	54de70fedf7d5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   02ea45abe1809       busybox
	3f0da5a79567c       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      19 minutes ago      Running             coredns                   1                   17ecff4f59d2d       coredns-7c65d6cfc9-dddm2
	355de783599f2       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      19 minutes ago      Running             kube-proxy                1                   0c1182fc5dd45       kube-proxy-vbpbk
	8a3298f9f8701       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   2e806445f254d       storage-provisioner
	c6154b0051dbc       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      19 minutes ago      Running             kube-scheduler            1                   2a1fe5cfb209d       kube-scheduler-no-preload-480205
	9c72eddc31372       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   88c3e467b83bf       etcd-no-preload-480205
	42cddfd08cd98       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      19 minutes ago      Running             kube-apiserver            1                   2eac684995236       kube-apiserver-no-preload-480205
	71cf38b8d4096       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      19 minutes ago      Running             kube-controller-manager   1                   c8613ecf9b51b       kube-controller-manager-no-preload-480205
	
	
	==> coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56065 - 64301 "HINFO IN 4263640063345838452.4491728043591611086. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014689769s
	
	
	==> describe nodes <==
	Name:               no-preload-480205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-480205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=no-preload-480205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T20_08_58_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 20:08:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-480205
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 20:37:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 20:34:07 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 20:34:07 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 20:34:07 +0000   Wed, 09 Oct 2024 20:08:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 20:34:07 +0000   Wed, 09 Oct 2024 20:18:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.162
	  Hostname:    no-preload-480205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0a2a815b36f34b10b1151cb9dfac50a7
	  System UUID:                0a2a815b-36f3-4b10-b115-1cb9dfac50a7
	  Boot ID:                    396a835f-b5b1-42f2-a666-2021b9d852ec
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-dddm2                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-480205                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-480205             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-480205    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-vbpbk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-480205             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-fhcfl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeReady                28m                kubelet          Node no-preload-480205 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-480205 event: Registered Node no-preload-480205 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-480205 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-480205 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-480205 event: Registered Node no-preload-480205 in Controller
	
	
	==> dmesg <==
	[Oct 9 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053569] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042080] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.203410] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.574137] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.593628] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.204617] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.064043] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080670] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.188662] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.114767] systemd-fstab-generator[664]: Ignoring "noauto" option for root device
	[  +0.273346] systemd-fstab-generator[693]: Ignoring "noauto" option for root device
	[Oct 9 20:18] systemd-fstab-generator[1229]: Ignoring "noauto" option for root device
	[  +0.063207] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.818456] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +5.297709] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.315153] systemd-fstab-generator[1974]: Ignoring "noauto" option for root device
	[  +3.724427] kauditd_printk_skb: 61 callbacks suppressed
	[  +5.142460] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] <==
	{"level":"info","ts":"2024-10-09T20:18:17.019662Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"95e2e907d4f1ad16","local-member-attributes":"{Name:no-preload-480205 ClientURLs:[https://192.168.39.162:2379]}","request-path":"/0/members/95e2e907d4f1ad16/attributes","cluster-id":"da8895e0fc3a6493","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-09T20:18:17.019688Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:18:17.019964Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-09T20:18:17.019992Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-09T20:18:17.019669Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-09T20:18:17.021010Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:18:17.021072Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T20:18:17.021995Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T20:18:17.022043Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.162:2379"}
	{"level":"info","ts":"2024-10-09T20:28:17.053843Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":860}
	{"level":"info","ts":"2024-10-09T20:28:17.065072Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":860,"took":"10.791545ms","hash":4032950074,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2895872,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-09T20:28:17.065116Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4032950074,"revision":860,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T20:33:17.061512Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1102}
	{"level":"info","ts":"2024-10-09T20:33:17.069371Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1102,"took":"6.487845ms","hash":2069478654,"current-db-size-bytes":2895872,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1724416,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-09T20:33:17.069443Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2069478654,"revision":1102,"compact-revision":860}
	{"level":"info","ts":"2024-10-09T20:37:11.063433Z","caller":"traceutil/trace.go:171","msg":"trace[485278692] linearizableReadLoop","detail":"{readStateIndex:1799; appliedIndex:1798; }","duration":"174.417505ms","start":"2024-10-09T20:37:10.888961Z","end":"2024-10-09T20:37:11.063379Z","steps":["trace[485278692] 'read index received'  (duration: 174.252215ms)","trace[485278692] 'applied index is now lower than readState.Index'  (duration: 164.67µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:37:11.063808Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.739159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:37:11.063871Z","caller":"traceutil/trace.go:171","msg":"trace[40468639] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1535; }","duration":"174.922941ms","start":"2024-10-09T20:37:10.888934Z","end":"2024-10-09T20:37:11.063857Z","steps":["trace[40468639] 'agreement among raft nodes before linearized reading'  (duration: 174.68949ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:37:11.064081Z","caller":"traceutil/trace.go:171","msg":"trace[185574302] transaction","detail":"{read_only:false; response_revision:1535; number_of_response:1; }","duration":"272.509275ms","start":"2024-10-09T20:37:10.791504Z","end":"2024-10-09T20:37:11.064013Z","steps":["trace[185574302] 'process raft request'  (duration: 271.719886ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T20:37:31.449741Z","caller":"traceutil/trace.go:171","msg":"trace[1306630533] transaction","detail":"{read_only:false; response_revision:1551; number_of_response:1; }","duration":"117.366248ms","start":"2024-10-09T20:37:31.332351Z","end":"2024-10-09T20:37:31.449718Z","steps":["trace[1306630533] 'process raft request'  (duration: 117.270073ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:37:31.829913Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"243.037237ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T20:37:31.829996Z","caller":"traceutil/trace.go:171","msg":"trace[785786558] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1551; }","duration":"243.155173ms","start":"2024-10-09T20:37:31.586813Z","end":"2024-10-09T20:37:31.829968Z","steps":["trace[785786558] 'range keys from in-memory index tree'  (duration: 243.020591ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T20:37:31.831409Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.363829ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12472317240376888123 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-480205\" mod_revision:1543 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-480205\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-480205\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-09T20:37:31.831542Z","caller":"traceutil/trace.go:171","msg":"trace[296985345] transaction","detail":"{read_only:false; response_revision:1552; number_of_response:1; }","duration":"379.969395ms","start":"2024-10-09T20:37:31.451563Z","end":"2024-10-09T20:37:31.831532Z","steps":["trace[296985345] 'process raft request'  (duration: 121.410987ms)","trace[296985345] 'compare'  (duration: 257.237326ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T20:37:31.831629Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-09T20:37:31.451544Z","time spent":"380.044914ms","remote":"127.0.0.1:55738","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":557,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/no-preload-480205\" mod_revision:1543 > success:<request_put:<key:\"/registry/leases/kube-node-lease/no-preload-480205\" value_size:499 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/no-preload-480205\" > >"}
	
	
	==> kernel <==
	 20:37:34 up 19 min,  0 users,  load average: 0.01, 0.08, 0.12
	Linux no-preload-480205 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] <==
	E1009 20:33:19.298582       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 20:33:19.298429       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:33:19.299889       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:33:19.299970       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:34:19.300512       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:34:19.300615       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1009 20:34:19.300757       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:34:19.300804       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1009 20:34:19.302061       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:34:19.302121       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1009 20:36:19.303264       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:36:19.303370       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1009 20:36:19.303264       1 handler_proxy.go:99] no RequestInfo found in the context
	E1009 20:36:19.303462       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1009 20:36:19.304802       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1009 20:36:19.304845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] <==
	E1009 20:32:22.029054       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:32:22.534628       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:32:52.035415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:32:52.541597       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:33:22.042040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:33:22.549569       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:33:52.048876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:33:52.556826       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:34:07.488489       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-480205"
	E1009 20:34:22.057006       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:22.564861       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:34:41.137411       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="292.744µs"
	E1009 20:34:52.063646       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:34:52.574018       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1009 20:34:54.140546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="166.076µs"
	E1009 20:35:22.069873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:22.582623       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:35:52.076717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:35:52.590461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:22.082759       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:22.599455       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:36:52.088697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:36:52.607107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1009 20:37:22.095224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1009 20:37:22.615319       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1009 20:18:19.729939       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1009 20:18:19.740129       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.162"]
	E1009 20:18:19.741593       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 20:18:19.813360       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1009 20:18:19.813400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1009 20:18:19.813428       1 server_linux.go:169] "Using iptables Proxier"
	I1009 20:18:19.820550       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 20:18:19.821534       1 server.go:483] "Version info" version="v1.31.1"
	I1009 20:18:19.821798       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:19.823648       1 config.go:199] "Starting service config controller"
	I1009 20:18:19.823750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 20:18:19.823854       1 config.go:105] "Starting endpoint slice config controller"
	I1009 20:18:19.823876       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 20:18:19.824846       1 config.go:328] "Starting node config controller"
	I1009 20:18:19.824885       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 20:18:19.924383       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1009 20:18:19.924493       1 shared_informer.go:320] Caches are synced for service config
	I1009 20:18:19.925097       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] <==
	I1009 20:18:15.873122       1 serving.go:386] Generated self-signed cert in-memory
	W1009 20:18:18.259653       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1009 20:18:18.260020       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 20:18:18.260077       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1009 20:18:18.260103       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1009 20:18:18.288922       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1009 20:18:18.288999       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 20:18:18.291328       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1009 20:18:18.294218       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 20:18:18.294921       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1009 20:18:18.295650       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1009 20:18:18.394898       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 20:36:24 no-preload-480205 kubelet[1355]: E1009 20:36:24.330458    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506184330093559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:26 no-preload-480205 kubelet[1355]: E1009 20:36:26.120454    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:36:34 no-preload-480205 kubelet[1355]: E1009 20:36:34.331794    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506194331525669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:34 no-preload-480205 kubelet[1355]: E1009 20:36:34.331844    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506194331525669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:40 no-preload-480205 kubelet[1355]: E1009 20:36:40.121248    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:36:44 no-preload-480205 kubelet[1355]: E1009 20:36:44.334513    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506204333892590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:44 no-preload-480205 kubelet[1355]: E1009 20:36:44.335097    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506204333892590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:54 no-preload-480205 kubelet[1355]: E1009 20:36:54.120378    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:36:54 no-preload-480205 kubelet[1355]: E1009 20:36:54.337353    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506214336898181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:36:54 no-preload-480205 kubelet[1355]: E1009 20:36:54.337404    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506214336898181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:04 no-preload-480205 kubelet[1355]: E1009 20:37:04.339402    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506224338911744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:04 no-preload-480205 kubelet[1355]: E1009 20:37:04.339681    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506224338911744,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:08 no-preload-480205 kubelet[1355]: E1009 20:37:08.121888    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]: E1009 20:37:14.152614    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]: E1009 20:37:14.342273    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506234341627177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:14 no-preload-480205 kubelet[1355]: E1009 20:37:14.342322    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506234341627177,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:23 no-preload-480205 kubelet[1355]: E1009 20:37:23.120617    1355 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-fhcfl" podUID="5c70178a-2be8-4006-b78b-5c4d45091004"
	Oct 09 20:37:24 no-preload-480205 kubelet[1355]: E1009 20:37:24.344776    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506244344278980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:24 no-preload-480205 kubelet[1355]: E1009 20:37:24.345396    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506244344278980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:34 no-preload-480205 kubelet[1355]: E1009 20:37:34.347624    1355 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506254347294192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 20:37:34 no-preload-480205 kubelet[1355]: E1009 20:37:34.347650    1355 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506254347294192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] <==
	I1009 20:18:19.601675       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1009 20:18:49.606924       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] <==
	I1009 20:18:50.408919       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 20:18:50.417291       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 20:18:50.417412       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 20:19:07.819420       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 20:19:07.819813       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171!
	I1009 20:19:07.820046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad750259-34f8-489e-aa79-f6194ad4f0c3", APIVersion:"v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171 became leader
	I1009 20:19:07.920435       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-480205_971a7ee3-29ba-41f6-a843-cee29f839171!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480205 -n no-preload-480205
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-480205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-fhcfl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl: exit status 1 (60.330224ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-fhcfl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (347.02s)
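
The post-mortem sequence above can be re-run by hand against the same profile; a minimal sketch using only the commands already captured in this log (the profile name no-preload-480205 and the metrics-server pod name are taken from the output above, and the describe step returns NotFound here because the pod had already been removed):

	# check apiserver status for the profile under test
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-480205 -n no-preload-480205
	# list pods that are not Running, across all namespaces
	kubectl --context no-preload-480205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# describe the non-running pod reported by the helper
	kubectl --context no-preload-480205 describe pod metrics-server-6867b74b74-fhcfl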

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
E1009 20:34:51.613509   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
E1009 20:34:51.908815   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.119:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.119:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: (the preceding connection-refused warning repeated 113 more times while polling https://192.168.61.119:8443 for "k8s-app=kubernetes-dashboard" pods)
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (235.877345ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-169021" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-169021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-169021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.103µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-169021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (223.312799ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-169021 logs -n 25: (1.463759975s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-790037                           | kubernetes-upgrade-790037    | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:07 UTC |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:07 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| ssh     | -p NoKubernetes-615869 sudo                            | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-615869                                 | NoKubernetes-615869          | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:08 UTC |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:08 UTC | 09 Oct 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-480205             | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-261596                              | cert-expiration-261596       | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	| delete  | -p                                                     | disable-driver-mounts-324052 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:09 UTC |
	|         | disable-driver-mounts-324052                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:09 UTC | 09 Oct 24 20:10 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-503330            | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC | 09 Oct 24 20:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:10 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-733270  | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:11 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-480205                  | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-169021        | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| start   | -p no-preload-480205                                   | no-preload-480205            | jenkins | v1.34.0 | 09 Oct 24 20:11 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-503330                 | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-503330                                  | embed-certs-503330           | jenkins | v1.34.0 | 09 Oct 24 20:12 UTC | 09 Oct 24 20:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-733270       | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-733270 | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:22 UTC |
	|         | default-k8s-diff-port-733270                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-169021             | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC | 09 Oct 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-169021                              | old-k8s-version-169021       | jenkins | v1.34.0 | 09 Oct 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 20:13:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 20:13:44.614940   64287 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:13:44.615052   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615076   64287 out.go:358] Setting ErrFile to fd 2...
	I1009 20:13:44.615081   64287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:13:44.615239   64287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:13:44.615728   64287 out.go:352] Setting JSON to false
	I1009 20:13:44.616598   64287 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6966,"bootTime":1728497859,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:13:44.616678   64287 start.go:139] virtualization: kvm guest
	I1009 20:13:44.618709   64287 out.go:177] * [old-k8s-version-169021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:13:44.619813   64287 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:13:44.619841   64287 notify.go:220] Checking for updates...
	I1009 20:13:44.621876   64287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:13:44.623226   64287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:13:44.624576   64287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:13:44.625863   64287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:13:44.627027   64287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:13:44.628559   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:13:44.628948   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.629014   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.644138   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35633
	I1009 20:13:44.644537   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.645045   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.645067   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.645380   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.645557   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.647115   64287 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1009 20:13:44.648228   64287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:13:44.648491   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:13:44.648529   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:13:44.663211   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46243
	I1009 20:13:44.663674   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:13:44.664164   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:13:44.664192   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:13:44.664482   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:13:44.664648   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:13:44.697395   64287 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 20:13:44.698580   64287 start.go:297] selected driver: kvm2
	I1009 20:13:44.698591   64287 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.698719   64287 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:13:44.699437   64287 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.699521   64287 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 20:13:44.713190   64287 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 20:13:44.713567   64287 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:13:44.713600   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:13:44.713640   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:13:44.713673   64287 start.go:340] cluster config:
	{Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:13:44.713805   64287 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 20:13:44.716209   64287 out.go:177] * Starting "old-k8s-version-169021" primary control-plane node in "old-k8s-version-169021" cluster
	I1009 20:13:44.717364   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:13:44.717399   64287 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 20:13:44.717409   64287 cache.go:56] Caching tarball of preloaded images
	I1009 20:13:44.717485   64287 preload.go:172] Found /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 20:13:44.717495   64287 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 20:13:44.717594   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:13:44.717753   64287 start.go:360] acquireMachinesLock for old-k8s-version-169021: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
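
The preload lines above show the start path finding the v1.20.0/cri-o tarball already in the local cache, so the download is skipped before the machines lock is acquired. A minimal Go sketch of that existence check follows; the cache layout and file name are copied from the log, and the helper is illustrative rather than minikube's actual preload code.

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath builds the expected location of a preloaded-images tarball
	// inside the minikube home directory (layout taken from the log above).
	func preloadPath(minikubeHome, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath("/home/jenkins/minikube-integration/19780-9412/.minikube", "v1.20.0", "cri-o")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("preload missing, would download:", p)
		}
	}
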
	I1009 20:13:48.943307   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:52.015296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:13:58.095330   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:01.167322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:07.247325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:10.323296   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:16.399318   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:19.471371   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:25.551279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:28.623322   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:34.703301   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:37.775281   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:43.855344   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:46.927300   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:53.007389   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:14:56.079332   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:02.159290   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:05.231351   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:11.311339   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:14.383289   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:20.463287   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:23.535402   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:29.615312   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:32.687319   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:38.767323   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:41.839306   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:47.919325   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:50.991292   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:15:57.071390   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:00.143404   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:06.223291   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:09.295298   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:15.375349   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:18.447271   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:24.527327   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
	I1009 20:16:27.599279   63427 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.162:22: connect: no route to host
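
The repeated "no route to host" entries above come from process 63427 retrying a TCP dial to the no-preload VM's SSH port while the machine is unreachable. Below is a small sketch of that dial-until-reachable loop; the address is taken from the log, while the timeout and backoff values are assumptions for illustration, not the values minikube uses.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH keeps dialing addr until the port accepts a connection or the
	// overall deadline expires, mirroring the retry pattern seen in the log.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s never became reachable: %w", addr, err)
			}
			time.Sleep(3 * time.Second) // assumed backoff between attempts
		}
	}

	func main() {
		if err := waitForSSH("192.168.39.162:22", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
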
	I1009 20:16:30.604005   63744 start.go:364] duration metric: took 3m52.142985964s to acquireMachinesLock for "embed-certs-503330"
	I1009 20:16:30.604068   63744 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:30.604076   63744 fix.go:54] fixHost starting: 
	I1009 20:16:30.604520   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:30.604571   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:30.620743   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38605
	I1009 20:16:30.621433   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:30.621936   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:16:30.621961   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:30.622323   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:30.622490   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:30.622654   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:16:30.624257   63744 fix.go:112] recreateIfNeeded on embed-certs-503330: state=Stopped err=<nil>
	I1009 20:16:30.624295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	W1009 20:16:30.624542   63744 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:30.627103   63744 out.go:177] * Restarting existing kvm2 VM for "embed-certs-503330" ...
	I1009 20:16:30.601719   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:30.601759   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602048   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:16:30.602078   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:16:30.602263   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:16:30.603862   63427 machine.go:96] duration metric: took 4m37.428982059s to provisionDockerMachine
	I1009 20:16:30.603905   63427 fix.go:56] duration metric: took 4m37.449834405s for fixHost
	I1009 20:16:30.603915   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 4m37.449856097s
	W1009 20:16:30.603942   63427 start.go:714] error starting host: provision: host is not running
	W1009 20:16:30.604043   63427 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1009 20:16:30.604052   63427 start.go:729] Will try again in 5 seconds ...
	I1009 20:16:30.628558   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Start
	I1009 20:16:30.628718   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring networks are active...
	I1009 20:16:30.629440   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network default is active
	I1009 20:16:30.629760   63744 main.go:141] libmachine: (embed-certs-503330) Ensuring network mk-embed-certs-503330 is active
	I1009 20:16:30.630197   63744 main.go:141] libmachine: (embed-certs-503330) Getting domain xml...
	I1009 20:16:30.630952   63744 main.go:141] libmachine: (embed-certs-503330) Creating domain...
	I1009 20:16:31.808982   63744 main.go:141] libmachine: (embed-certs-503330) Waiting to get IP...
	I1009 20:16:31.809856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:31.810317   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:31.810463   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:31.810307   64895 retry.go:31] will retry after 287.246953ms: waiting for machine to come up
	I1009 20:16:32.098815   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.099474   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.099513   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.099422   64895 retry.go:31] will retry after 323.155152ms: waiting for machine to come up
	I1009 20:16:32.424145   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.424618   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.424646   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.424576   64895 retry.go:31] will retry after 410.947245ms: waiting for machine to come up
	I1009 20:16:32.837351   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:32.837773   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:32.837823   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:32.837735   64895 retry.go:31] will retry after 562.56411ms: waiting for machine to come up
	I1009 20:16:35.605597   63427 start.go:360] acquireMachinesLock for no-preload-480205: {Name:mk7c51cea3c0464dd00739b2e44e496b23ddd39b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1009 20:16:33.401377   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.401828   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.401877   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.401781   64895 retry.go:31] will retry after 460.104327ms: waiting for machine to come up
	I1009 20:16:33.863457   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:33.863854   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:33.863880   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:33.863815   64895 retry.go:31] will retry after 668.516186ms: waiting for machine to come up
	I1009 20:16:34.533619   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:34.534019   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:34.534054   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:34.533954   64895 retry.go:31] will retry after 966.757544ms: waiting for machine to come up
	I1009 20:16:35.501805   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:35.502178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:35.502200   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:35.502137   64895 retry.go:31] will retry after 1.017669155s: waiting for machine to come up
	I1009 20:16:36.521729   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:36.522150   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:36.522178   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:36.522115   64895 retry.go:31] will retry after 1.292799206s: waiting for machine to come up
	I1009 20:16:37.816782   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:37.817187   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:37.817207   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:37.817156   64895 retry.go:31] will retry after 2.202935241s: waiting for machine to come up
	I1009 20:16:40.022666   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:40.023072   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:40.023101   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:40.023030   64895 retry.go:31] will retry after 2.360885318s: waiting for machine to come up
	I1009 20:16:42.385530   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:42.385947   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:42.385976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:42.385909   64895 retry.go:31] will retry after 2.1999082s: waiting for machine to come up
	I1009 20:16:44.588258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:44.588617   63744 main.go:141] libmachine: (embed-certs-503330) DBG | unable to find current IP address of domain embed-certs-503330 in network mk-embed-certs-503330
	I1009 20:16:44.588649   63744 main.go:141] libmachine: (embed-certs-503330) DBG | I1009 20:16:44.588581   64895 retry.go:31] will retry after 3.345984614s: waiting for machine to come up
	I1009 20:16:47.937287   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937758   63744 main.go:141] libmachine: (embed-certs-503330) Found IP for machine: 192.168.50.97
	I1009 20:16:47.937785   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has current primary IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.937790   63744 main.go:141] libmachine: (embed-certs-503330) Reserving static IP address...
	I1009 20:16:47.938195   63744 main.go:141] libmachine: (embed-certs-503330) Reserved static IP address: 192.168.50.97
	I1009 20:16:47.938231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.938241   63744 main.go:141] libmachine: (embed-certs-503330) Waiting for SSH to be available...
	I1009 20:16:47.938266   63744 main.go:141] libmachine: (embed-certs-503330) DBG | skip adding static IP to network mk-embed-certs-503330 - found existing host DHCP lease matching {name: "embed-certs-503330", mac: "52:54:00:20:23:dc", ip: "192.168.50.97"}
	I1009 20:16:47.938279   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Getting to WaitForSSH function...
	I1009 20:16:47.940214   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940468   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:47.940499   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:47.940570   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH client type: external
	I1009 20:16:47.940605   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa (-rw-------)
	I1009 20:16:47.940639   63744 main.go:141] libmachine: (embed-certs-503330) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.97 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:16:47.940654   63744 main.go:141] libmachine: (embed-certs-503330) DBG | About to run SSH command:
	I1009 20:16:47.940660   63744 main.go:141] libmachine: (embed-certs-503330) DBG | exit 0
	I1009 20:16:48.066973   63744 main.go:141] libmachine: (embed-certs-503330) DBG | SSH cmd err, output: <nil>: 
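
WaitForSSH above shells out to the external ssh binary with a fixed option set and runs `exit 0` until it succeeds. The sketch below reproduces a single attempt of that probe with exec.Command; the option list and key path are copied from the log, and this is an illustration rather than minikube's own WaitForSSH implementation.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Options mirror the external SSH invocation recorded in the log above.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3", "-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no", "-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa",
			"-p", "22",
			"docker@192.168.50.97",
			"exit 0",
		}
		out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
		fmt.Printf("probe output=%q err=%v\n", out, err)
	}
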
	I1009 20:16:48.067404   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetConfigRaw
	I1009 20:16:48.068009   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.070587   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.070969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.070998   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.071241   63744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/config.json ...
	I1009 20:16:48.071426   63744 machine.go:93] provisionDockerMachine start ...
	I1009 20:16:48.071443   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:48.071655   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.074102   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.074448   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.074560   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.074721   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074872   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.074989   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.075156   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.075346   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.075358   63744 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:16:48.187275   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:16:48.187302   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187600   63744 buildroot.go:166] provisioning hostname "embed-certs-503330"
	I1009 20:16:48.187624   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.187763   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.190220   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190585   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.190606   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.190736   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.190932   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191110   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.191251   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.191400   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.191608   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.191629   63744 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-503330 && echo "embed-certs-503330" | sudo tee /etc/hostname
	I1009 20:16:48.321932   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-503330
	
	I1009 20:16:48.321961   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.324976   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.325393   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.325542   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.325720   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.325856   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.326024   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.326360   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.326546   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.326570   63744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-503330' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-503330/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-503330' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:16:49.299713   64109 start.go:364] duration metric: took 3m11.699715872s to acquireMachinesLock for "default-k8s-diff-port-733270"
	I1009 20:16:49.299779   64109 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:16:49.299788   64109 fix.go:54] fixHost starting: 
	I1009 20:16:49.300158   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:16:49.300205   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:16:49.319769   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1009 20:16:49.320201   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:16:49.320678   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:16:49.320704   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:16:49.321107   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:16:49.321301   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:16:49.321463   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:16:49.322908   64109 fix.go:112] recreateIfNeeded on default-k8s-diff-port-733270: state=Stopped err=<nil>
	I1009 20:16:49.322943   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	W1009 20:16:49.323098   64109 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:16:49.324952   64109 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-733270" ...
	I1009 20:16:48.448176   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:16:48.448210   63744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:16:48.448243   63744 buildroot.go:174] setting up certificates
	I1009 20:16:48.448254   63744 provision.go:84] configureAuth start
	I1009 20:16:48.448267   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetMachineName
	I1009 20:16:48.448531   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:48.450984   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451384   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.451422   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.451479   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.453759   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454080   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.454106   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.454202   63744 provision.go:143] copyHostCerts
	I1009 20:16:48.454273   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:16:48.454283   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:16:48.454362   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:16:48.454505   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:16:48.454517   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:16:48.454565   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:16:48.454650   63744 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:16:48.454660   63744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:16:48.454696   63744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:16:48.454767   63744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.embed-certs-503330 san=[127.0.0.1 192.168.50.97 embed-certs-503330 localhost minikube]
	I1009 20:16:48.669251   63744 provision.go:177] copyRemoteCerts
	I1009 20:16:48.669335   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:16:48.669373   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.671969   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672231   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.672258   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.672435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.672629   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.672739   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.672856   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:48.756869   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:16:48.781853   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:16:48.805746   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:16:48.828729   63744 provision.go:87] duration metric: took 380.461988ms to configureAuth
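
The configureAuth step above copies the host CA material and generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.97, embed-certs-503330, localhost and minikube. As a rough illustration of that SAN handling, here is a self-signed variant using crypto/x509; the real provisioning signs the server cert with the minikube CA key, so treat this purely as a sketch of the certificate template.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs copied from the provision.go line above; self-signed for brevity.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-503330"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-503330", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.97")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
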
	I1009 20:16:48.828774   63744 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:16:48.828972   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:16:48.829053   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:48.831590   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.831874   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:48.831896   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:48.832085   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:48.832273   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832411   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:48.832545   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:48.832664   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:48.832906   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:48.832928   63744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:16:49.057643   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:16:49.057673   63744 machine.go:96] duration metric: took 986.233627ms to provisionDockerMachine
	I1009 20:16:49.057686   63744 start.go:293] postStartSetup for "embed-certs-503330" (driver="kvm2")
	I1009 20:16:49.057697   63744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:16:49.057713   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.057985   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:16:49.058013   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.060943   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061314   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.061336   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.061544   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.061732   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.061891   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.062024   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.145757   63744 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:16:49.150378   63744 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:16:49.150407   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:16:49.150486   63744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:16:49.150589   63744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:16:49.150697   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:16:49.160318   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:49.184297   63744 start.go:296] duration metric: took 126.596407ms for postStartSetup
	I1009 20:16:49.184337   63744 fix.go:56] duration metric: took 18.580262238s for fixHost
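
postStartSetup above scans .minikube/files for local assets and maps each one to its destination inside the guest (files/etc/ssl/certs/166072.pem becomes /etc/ssl/certs/166072.pem). A minimal walk that computes the same mapping is sketched below; it only prints the pairs and is not the filesync code itself.

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	func main() {
		root := "/home/jenkins/minikube-integration/19780-9412/.minikube/files"
		// Every regular file under root maps to the same relative path rooted at "/".
		filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, relErr := filepath.Rel(root, path)
			if relErr != nil {
				return relErr
			}
			fmt.Printf("local asset: %s -> /%s\n", path, rel)
			return nil
		})
	}
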
	I1009 20:16:49.184374   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.186720   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187020   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.187043   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.187243   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.187435   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187571   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.187689   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.187812   63744 main.go:141] libmachine: Using SSH client type: native
	I1009 20:16:49.187993   63744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.97 22 <nil> <nil>}
	I1009 20:16:49.188005   63744 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:16:49.299573   63744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505009.274901835
	
	I1009 20:16:49.299591   63744 fix.go:216] guest clock: 1728505009.274901835
	I1009 20:16:49.299610   63744 fix.go:229] Guest: 2024-10-09 20:16:49.274901835 +0000 UTC Remote: 2024-10-09 20:16:49.184353734 +0000 UTC m=+250.856887553 (delta=90.548101ms)
	I1009 20:16:49.299639   63744 fix.go:200] guest clock delta is within tolerance: 90.548101ms
	I1009 20:16:49.299644   63744 start.go:83] releasing machines lock for "embed-certs-503330", held for 18.695596427s
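
Before releasing the machines lock, fix.go reads `date +%s.%N` on the guest and compares it to the host clock; the 90.548101ms delta above is judged within tolerance. The helper below redoes that comparison approximately on the logged values (float parsing drops sub-microsecond precision); the parsing approach and the tolerance constant are assumptions made for the example.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's "date +%s.%N" output and returns guest minus host.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		host := time.Unix(0, 1728505009184353734) // host-side timestamp from the log
		d, err := clockDelta("1728505009.274901835", host)
		if err != nil {
			panic(err)
		}
		if d < 0 {
			d = -d
		}
		tolerance := 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance: %v\n", d, d < tolerance)
	}
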
	I1009 20:16:49.299671   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.299949   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:49.302951   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303308   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.303337   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.303494   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.303952   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304100   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:16:49.304164   63744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:16:49.304213   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.304273   63744 ssh_runner.go:195] Run: cat /version.json
	I1009 20:16:49.304295   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:16:49.306543   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306817   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.306856   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.306901   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307010   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307196   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307365   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.307387   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:49.307404   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:49.307518   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.307612   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:16:49.307778   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:16:49.307974   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:16:49.308128   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:16:49.410624   63744 ssh_runner.go:195] Run: systemctl --version
	I1009 20:16:49.418412   63744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:16:49.567318   63744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:16:49.573238   63744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:16:49.573326   63744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:16:49.589269   63744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:16:49.589292   63744 start.go:495] detecting cgroup driver to use...
	I1009 20:16:49.589361   63744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:16:49.606654   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:16:49.621200   63744 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:16:49.621253   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:16:49.635346   63744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:16:49.649294   63744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:16:49.764096   63744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:16:49.892568   63744 docker.go:233] disabling docker service ...
	I1009 20:16:49.892650   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:16:49.907527   63744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:16:49.920395   63744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:16:50.067177   63744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:16:50.222407   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:16:50.236968   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:16:50.257005   63744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:16:50.257058   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.269955   63744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:16:50.270011   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.282633   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.296259   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.307683   63744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:16:50.320174   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.331518   63744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:16:50.350124   63744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
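The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager. A rough in-place equivalent of the two main substitutions (a sketch, not minikube's implementation; it skips the conmon_cgroup and default_sysctls edits and needs root):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Path and values come from the log above.
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path)
}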
	I1009 20:16:50.361327   63744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:16:50.371637   63744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:16:50.371707   63744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:16:50.385652   63744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
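The sysctl failure, modprobe, and ip_forward lines above are minikube probing for bridge netfilter support: the sysctl file is absent until br_netfilter is loaded, after which IP forwarding is switched on. A small sketch of that probe-and-fix sequence under the same assumptions (requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const brSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	// The status-255 error in the log means this file does not exist yet.
	if _, err := os.Stat(brSysctl); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter")
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe failed: %v: %s\n", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("netfilter and ip_forward configured")
}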
	I1009 20:16:50.395762   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:50.521257   63744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:16:50.631377   63744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:16:50.631447   63744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:16:50.636594   63744 start.go:563] Will wait 60s for crictl version
	I1009 20:16:50.636643   63744 ssh_runner.go:195] Run: which crictl
	I1009 20:16:50.640677   63744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:16:50.693612   63744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:16:50.693695   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.724735   63744 ssh_runner.go:195] Run: crio --version
	I1009 20:16:50.755820   63744 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:16:49.326372   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Start
	I1009 20:16:49.326507   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring networks are active...
	I1009 20:16:49.327206   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network default is active
	I1009 20:16:49.327553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Ensuring network mk-default-k8s-diff-port-733270 is active
	I1009 20:16:49.327882   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Getting domain xml...
	I1009 20:16:49.328531   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Creating domain...
	I1009 20:16:50.594895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting to get IP...
	I1009 20:16:50.595715   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596086   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.596183   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.596074   65019 retry.go:31] will retry after 205.766462ms: waiting for machine to come up
	I1009 20:16:50.803483   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.803974   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:50.804004   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:50.803914   65019 retry.go:31] will retry after 357.132949ms: waiting for machine to come up
	I1009 20:16:51.162582   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163122   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.163163   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.163072   65019 retry.go:31] will retry after 316.280977ms: waiting for machine to come up
	I1009 20:16:51.480560   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481080   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.481107   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.481029   65019 retry.go:31] will retry after 498.455228ms: waiting for machine to come up
	I1009 20:16:51.980618   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981136   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:51.981165   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:51.981099   65019 retry.go:31] will retry after 595.314117ms: waiting for machine to come up
	I1009 20:16:50.757146   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetIP
	I1009 20:16:50.759889   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760334   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:16:50.760365   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:16:50.760613   63744 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1009 20:16:50.764810   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
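The bash pipeline above refreshes /etc/hosts so host.minikube.internal resolves to the host-side gateway (192.168.50.1 in this run). A sketch of the same filter-and-append logic; the real command stages the result in /tmp and copies it with sudo, which this version simplifies by writing in place (root required):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.50.1\thost.minikube.internal" // mapping from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the log's grep -v: drop any stale host.minikube.internal line.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", hostsPath)
}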
	I1009 20:16:50.777746   63744 kubeadm.go:883] updating cluster {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:16:50.777862   63744 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:16:50.777926   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:50.816658   63744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:16:50.816722   63744 ssh_runner.go:195] Run: which lz4
	I1009 20:16:50.820880   63744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:16:50.825586   63744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:16:50.825614   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:16:52.206757   63744 crio.go:462] duration metric: took 1.385906608s to copy over tarball
	I1009 20:16:52.206837   63744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:16:52.577801   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578322   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:52.578346   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:52.578269   65019 retry.go:31] will retry after 872.123349ms: waiting for machine to come up
	I1009 20:16:53.452602   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453038   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:53.453068   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:53.452984   65019 retry.go:31] will retry after 727.985471ms: waiting for machine to come up
	I1009 20:16:54.182823   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:54.183274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:54.183181   65019 retry.go:31] will retry after 1.366580369s: waiting for machine to come up
	I1009 20:16:55.551983   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:55.552452   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:55.552365   65019 retry.go:31] will retry after 1.327634108s: waiting for machine to come up
	I1009 20:16:56.881693   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882111   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:56.882143   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:56.882061   65019 retry.go:31] will retry after 1.817770667s: waiting for machine to come up
	I1009 20:16:54.208830   63744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.001963207s)
	I1009 20:16:54.208858   63744 crio.go:469] duration metric: took 2.002072256s to extract the tarball
	I1009 20:16:54.208866   63744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:16:54.244727   63744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:16:54.287243   63744 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:16:54.287271   63744 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:16:54.287280   63744 kubeadm.go:934] updating node { 192.168.50.97 8443 v1.31.1 crio true true} ...
	I1009 20:16:54.287407   63744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-503330 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.97
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:16:54.287496   63744 ssh_runner.go:195] Run: crio config
	I1009 20:16:54.335950   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:16:54.335972   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:16:54.335992   63744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:16:54.336018   63744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.97 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-503330 NodeName:embed-certs-503330 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.97"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.97 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:16:54.336171   63744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.97
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-503330"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.97
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.97"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
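The kubeadm config dump above is staged as /var/tmp/minikube/kubeadm.yaml.new and holds four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small inspection sketch that splits the staged file on document separators and prints each kind (illustrative only):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml.new" // staging path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}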
	I1009 20:16:54.336230   63744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:16:54.346657   63744 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:16:54.346730   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:16:54.356150   63744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:16:54.372246   63744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:16:54.388168   63744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1009 20:16:54.404739   63744 ssh_runner.go:195] Run: grep 192.168.50.97	control-plane.minikube.internal$ /etc/hosts
	I1009 20:16:54.408599   63744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.97	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:16:54.421033   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:16:54.554324   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:16:54.571469   63744 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330 for IP: 192.168.50.97
	I1009 20:16:54.571493   63744 certs.go:194] generating shared ca certs ...
	I1009 20:16:54.571514   63744 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:16:54.571702   63744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:16:54.571755   63744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:16:54.571768   63744 certs.go:256] generating profile certs ...
	I1009 20:16:54.571890   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/client.key
	I1009 20:16:54.571977   63744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key.3496edbe
	I1009 20:16:54.572035   63744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key
	I1009 20:16:54.572172   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:16:54.572212   63744 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:16:54.572225   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:16:54.572263   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:16:54.572295   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:16:54.572339   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:16:54.572395   63744 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:16:54.573111   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:16:54.613670   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:16:54.647116   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:16:54.683687   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:16:54.722221   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1009 20:16:54.759929   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:16:54.787802   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:16:54.810019   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/embed-certs-503330/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:16:54.832805   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:16:54.854772   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:16:54.878414   63744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:16:54.901850   63744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:16:54.918260   63744 ssh_runner.go:195] Run: openssl version
	I1009 20:16:54.923815   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:16:54.934350   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938733   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.938799   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:16:54.944372   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:16:54.954950   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:16:54.965726   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970021   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.970081   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:16:54.975568   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:16:54.986392   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:16:54.996852   63744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001051   63744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.001096   63744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:16:55.006579   63744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
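The openssl -hash plus ln -fs pairs above install each CA into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run). A sketch of one such iteration, shelling out to openssl the same way the logged commands do (root and openssl required; the PEM path is the one from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/166072.pem"
	// Equivalent of: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror ln -fs: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s -> %s\n", link, pem)
}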
	I1009 20:16:55.017264   63744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:16:55.021893   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:16:55.027729   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:16:55.033714   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:16:55.039641   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:16:55.045236   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:16:55.050855   63744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
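Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate expires within the next 24 hours. A pure-Go sketch of the same check using crypto/x509; the path is one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in", path)
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400: fail if the cert's NotAfter falls within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past 24h:", cert.NotAfter)
}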
	I1009 20:16:55.056748   63744 kubeadm.go:392] StartCluster: {Name:embed-certs-503330 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-503330 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:16:55.056833   63744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:16:55.056882   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.098936   63744 cri.go:89] found id: ""
	I1009 20:16:55.099014   63744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:16:55.109556   63744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:16:55.109579   63744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:16:55.109625   63744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:16:55.119379   63744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:16:55.120348   63744 kubeconfig.go:125] found "embed-certs-503330" server: "https://192.168.50.97:8443"
	I1009 20:16:55.122330   63744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:16:55.131900   63744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.97
	I1009 20:16:55.131927   63744 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:16:55.131936   63744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:16:55.131978   63744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:16:55.171019   63744 cri.go:89] found id: ""
	I1009 20:16:55.171090   63744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:16:55.188501   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:16:55.198221   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:16:55.198244   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:16:55.198304   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:16:55.207327   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:16:55.207371   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:16:55.216598   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:16:55.226558   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:16:55.226618   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:16:55.237485   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.246557   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:16:55.246604   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:16:55.257542   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:16:55.267040   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:16:55.267116   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:16:55.276472   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:16:55.285774   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:55.402155   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.327441   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.559638   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.623281   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:16:56.682538   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:16:56.682638   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.183012   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:57.682740   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.183107   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.702309   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702787   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:16:58.702821   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:16:58.702713   65019 retry.go:31] will retry after 1.927245136s: waiting for machine to come up
	I1009 20:17:00.631448   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631884   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:00.631916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:00.631828   65019 retry.go:31] will retry after 2.288888745s: waiting for machine to come up
	I1009 20:16:58.683664   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:16:58.717388   63744 api_server.go:72] duration metric: took 2.034851204s to wait for apiserver process to appear ...
	I1009 20:16:58.717417   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:16:58.717441   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:16:58.717988   63744 api_server.go:269] stopped: https://192.168.50.97:8443/healthz: Get "https://192.168.50.97:8443/healthz": dial tcp 192.168.50.97:8443: connect: connection refused
	I1009 20:16:59.217777   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.473119   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.473153   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.473179   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.549848   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:01.549880   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:01.718137   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:01.722540   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:01.722571   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.217856   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.222606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:02.222638   63744 api_server.go:103] status: https://192.168.50.97:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:02.718198   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:17:02.723729   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:17:02.729552   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:02.729582   63744 api_server.go:131] duration metric: took 4.01215752s to wait for apiserver health ...
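The healthz wait above polls https://192.168.50.97:8443/healthz roughly every 500ms, tolerating the early 403 (anonymous user) and 500 (post-start hooks still settling) responses until a 200 arrives about four seconds in. A sketch of such a polling loop; skipping TLS verification is a simplification here, since this probe presents no client credentials:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.50.97:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			// 403 and 500 responses, as seen in the log, just mean "retry".
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}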
	I1009 20:17:02.729594   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:17:02.729603   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:02.731426   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:02.732669   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:02.743408   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:02.762443   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:02.774604   63744 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:02.774647   63744 system_pods.go:61] "coredns-7c65d6cfc9-df57g" [6d86b5f4-6ab2-4313-9247-f2766bb2cd17] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:02.774666   63744 system_pods.go:61] "etcd-embed-certs-503330" [c3d2f07e-3ea7-41ae-9247-0c79e5aeef7f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:02.774685   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [917f81d6-e4fb-41fe-8051-a1c645e35af8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:02.774693   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [d12d9ad5-e80a-4745-ae2d-3f24965de4d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:02.774706   63744 system_pods.go:61] "kube-proxy-dsh65" [f027d12a-f0b8-45a9-a73d-1afdd80ef7f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:17:02.774718   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [a42cdb71-099c-40a3-b474-ced8659ae391] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:02.774736   63744 system_pods.go:61] "metrics-server-6867b74b74-6z7jj" [58aa0ad3-3210-4722-a579-392688c91bae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:02.774752   63744 system_pods.go:61] "storage-provisioner" [3b0ab765-5bd6-44ac-866e-1c1168ad8ed9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:17:02.774765   63744 system_pods.go:74] duration metric: took 12.298201ms to wait for pod list to return data ...
	I1009 20:17:02.774777   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:02.785857   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:02.785882   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:02.785892   63744 node_conditions.go:105] duration metric: took 11.107216ms to run NodePressure ...
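The system_pods and node_conditions lines above come from listing kube-system pods and reading node capacity through the Kubernetes API. A client-go sketch of the pod-listing half; reading the kubeconfig path from the KUBECONFIG environment variable is an assumption of this sketch, standing in for the profile's kubeconfig:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// List kube-system pods and print name plus phase, much like the log's summary.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
	}
}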
	I1009 20:17:02.785910   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:03.147197   63744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150727   63744 kubeadm.go:739] kubelet initialised
	I1009 20:17:03.150746   63744 kubeadm.go:740] duration metric: took 3.5247ms waiting for restarted kubelet to initialise ...
	I1009 20:17:03.150753   63744 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:03.155171   63744 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.160022   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160045   63744 pod_ready.go:82] duration metric: took 4.856483ms for pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.160053   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "coredns-7c65d6cfc9-df57g" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.160059   63744 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.165155   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165176   63744 pod_ready.go:82] duration metric: took 5.104415ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.165184   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "etcd-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.165190   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.170669   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170684   63744 pod_ready.go:82] duration metric: took 5.48497ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.170691   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.170697   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.175025   63744 pod_ready.go:98] node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175039   63744 pod_ready.go:82] duration metric: took 4.333372ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:03.175047   63744 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-503330" hosting pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-503330" has status "Ready":"False"
	I1009 20:17:03.175052   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:02.923370   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923752   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:02.923780   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:02.923727   65019 retry.go:31] will retry after 2.87724378s: waiting for machine to come up
	I1009 20:17:05.803251   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803748   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | unable to find current IP address of domain default-k8s-diff-port-733270 in network mk-default-k8s-diff-port-733270
	I1009 20:17:05.803774   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | I1009 20:17:05.803698   65019 retry.go:31] will retry after 5.592307609s: waiting for machine to come up
	I1009 20:17:03.565676   63744 pod_ready.go:93] pod "kube-proxy-dsh65" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:03.565703   63744 pod_ready.go:82] duration metric: took 390.643175ms for pod "kube-proxy-dsh65" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:03.565715   63744 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:05.574374   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:08.072406   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:11.397365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397813   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Found IP for machine: 192.168.72.134
	I1009 20:17:11.397834   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has current primary IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.397840   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserving static IP address...
	I1009 20:17:11.398220   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.398246   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | skip adding static IP to network mk-default-k8s-diff-port-733270 - found existing host DHCP lease matching {name: "default-k8s-diff-port-733270", mac: "52:54:00:b6:c5:b9", ip: "192.168.72.134"}
	I1009 20:17:11.398259   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Reserved static IP address: 192.168.72.134
	I1009 20:17:11.398274   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Waiting for SSH to be available...
	I1009 20:17:11.398291   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Getting to WaitForSSH function...
	I1009 20:17:11.400217   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400530   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.400553   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.400649   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH client type: external
	I1009 20:17:11.400675   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa (-rw-------)
	I1009 20:17:11.400710   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:11.400729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | About to run SSH command:
	I1009 20:17:11.400744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | exit 0
	I1009 20:17:11.526822   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:11.527202   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetConfigRaw
	I1009 20:17:11.527838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.530365   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530702   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.530729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.530978   64109 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/config.json ...
	I1009 20:17:11.531187   64109 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:11.531204   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:11.531388   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.533307   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533646   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.533671   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.533778   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.533949   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534088   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.534181   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.534308   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.534521   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.534535   64109 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:11.643315   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:11.643341   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643558   64109 buildroot.go:166] provisioning hostname "default-k8s-diff-port-733270"
	I1009 20:17:11.643580   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.643746   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.646369   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646741   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.646771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.646919   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.647087   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647249   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.647363   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.647495   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.647698   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.647723   64109 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-733270 && echo "default-k8s-diff-port-733270" | sudo tee /etc/hostname
	I1009 20:17:11.774094   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-733270
	
	I1009 20:17:11.774129   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.776945   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.777318   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.777450   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:11.777637   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777807   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:11.777942   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:11.778077   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:11.778265   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:11.778290   64109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-733270' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-733270/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-733270' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:11.899636   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:11.899666   64109 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:11.899712   64109 buildroot.go:174] setting up certificates
	I1009 20:17:11.899729   64109 provision.go:84] configureAuth start
	I1009 20:17:11.899745   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetMachineName
	I1009 20:17:11.900007   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:11.902313   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902620   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.902647   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.902783   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:11.904665   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.904999   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:11.905028   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:11.905121   64109 provision.go:143] copyHostCerts
	I1009 20:17:11.905194   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:11.905208   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:11.905274   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:11.905389   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:11.905403   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:11.905433   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:11.905506   64109 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:11.905515   64109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:11.905543   64109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:11.905658   64109 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-733270 san=[127.0.0.1 192.168.72.134 default-k8s-diff-port-733270 localhost minikube]
	I1009 20:17:12.089469   64109 provision.go:177] copyRemoteCerts
	I1009 20:17:12.089537   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:12.089563   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.091929   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092210   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.092242   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.092431   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.092601   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.092729   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.092822   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.177787   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1009 20:17:12.201400   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 20:17:12.225416   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:12.247777   64109 provision.go:87] duration metric: took 348.034794ms to configureAuth
	I1009 20:17:12.247801   64109 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:12.247989   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:12.248077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.250489   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.250849   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.250880   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.251083   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.251281   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251515   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.251633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.251786   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.251973   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.251995   64109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:12.475656   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:12.475687   64109 machine.go:96] duration metric: took 944.487945ms to provisionDockerMachine
	I1009 20:17:12.475701   64109 start.go:293] postStartSetup for "default-k8s-diff-port-733270" (driver="kvm2")
	I1009 20:17:12.475714   64109 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:12.475730   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.476033   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:12.476070   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.478464   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478809   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.478838   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.478895   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.479077   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.479198   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.479330   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.719812   64287 start.go:364] duration metric: took 3m28.002029987s to acquireMachinesLock for "old-k8s-version-169021"
	I1009 20:17:12.719868   64287 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:12.719874   64287 fix.go:54] fixHost starting: 
	I1009 20:17:12.720288   64287 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:12.720338   64287 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:12.736888   64287 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39719
	I1009 20:17:12.737330   64287 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:12.737796   64287 main.go:141] libmachine: Using API Version  1
	I1009 20:17:12.737818   64287 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:12.738095   64287 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:12.738284   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:12.738407   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetState
	I1009 20:17:12.740019   64287 fix.go:112] recreateIfNeeded on old-k8s-version-169021: state=Stopped err=<nil>
	I1009 20:17:12.740056   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	W1009 20:17:12.740218   64287 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:12.741971   64287 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-169021" ...
	I1009 20:17:10.572038   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:13.072273   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:12.566216   64109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:12.570733   64109 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:12.570754   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:12.570811   64109 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:12.570894   64109 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:12.571002   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:12.580485   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:12.604494   64109 start.go:296] duration metric: took 128.779636ms for postStartSetup
	I1009 20:17:12.604528   64109 fix.go:56] duration metric: took 23.304740697s for fixHost
	I1009 20:17:12.604547   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.607253   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607579   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.607611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.607762   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.607941   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608085   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.608190   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.608315   64109 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:12.608524   64109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1009 20:17:12.608542   64109 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:12.719641   64109 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505032.674262019
	
	I1009 20:17:12.719663   64109 fix.go:216] guest clock: 1728505032.674262019
	I1009 20:17:12.719672   64109 fix.go:229] Guest: 2024-10-09 20:17:12.674262019 +0000 UTC Remote: 2024-10-09 20:17:12.604532015 +0000 UTC m=+215.141542026 (delta=69.730004ms)
	I1009 20:17:12.719734   64109 fix.go:200] guest clock delta is within tolerance: 69.730004ms
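The three fix.go lines above show the guest-clock check: minikube reads the VM's clock with `date +%s.%N` over SSH, compares it against the host's wall clock, and accepts the machine when the skew (here 69.730004ms) is within tolerance. A minimal Go sketch of that comparison, for illustration only; the helper name and the one-second tolerance are assumptions, not minikube's actual values:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch turns `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1728505032.674262019") // guest value from the log above
        if err != nil {
            panic(err)
        }
        remote := time.Date(2024, 10, 9, 20, 17, 12, 604532015, time.UTC) // host-side time from the log
        delta := guest.Sub(remote)
        const tolerance = time.Second // assumed tolerance, for illustration
        fmt.Printf("guest clock delta: %v (within %v: %v)\n", delta, tolerance, delta > -tolerance && delta < tolerance)
    }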
	I1009 20:17:12.719742   64109 start.go:83] releasing machines lock for "default-k8s-diff-port-733270", held for 23.419984544s
	I1009 20:17:12.719771   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.720009   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:12.722908   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723283   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.723308   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.723449   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724041   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724196   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:17:12.724276   64109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:12.724314   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.724356   64109 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:12.724376   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:17:12.726747   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727051   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727098   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727176   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727264   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727422   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727555   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.727586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:12.727622   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:12.727681   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.727738   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:17:12.727865   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:17:12.727993   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:17:12.728110   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:17:12.808408   64109 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:12.835630   64109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:12.989949   64109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:12.995824   64109 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:12.995893   64109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:13.011680   64109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:13.011707   64109 start.go:495] detecting cgroup driver to use...
	I1009 20:17:13.011774   64109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:13.027110   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:13.040097   64109 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:13.040198   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:13.054001   64109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:13.068380   64109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:13.190626   64109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:13.367857   64109 docker.go:233] disabling docker service ...
	I1009 20:17:13.367921   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:13.385929   64109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:13.403253   64109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:13.528117   64109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:13.663611   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:13.679242   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:13.699707   64109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:13.699775   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.710685   64109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:13.710749   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.722116   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.732987   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.744601   64109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:13.755998   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.768759   64109 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.788295   64109 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:13.798784   64109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:13.808745   64109 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:13.808810   64109 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:13.823798   64109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
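The sequence above is the usual bridge-netfilter preparation for a CNI-backed runtime: probing net.bridge.bridge-nf-call-iptables fails while br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. A standalone sketch of the same check-then-load pattern, assuming root and a modprobe binary on PATH; this is not minikube's own code:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // The sysctl file only exists once br_netfilter is loaded.
        if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
            fmt.Println("bridge-nf-call-iptables not present, loading br_netfilter")
            if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
                fmt.Printf("modprobe failed: %v\n%s", err, out)
                return
            }
        }
        // Mirror `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            fmt.Printf("enabling ip_forward failed: %v\n", err)
        }
    }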
	I1009 20:17:13.834854   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:13.959977   64109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:14.071531   64109 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:14.071613   64109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:14.077348   64109 start.go:563] Will wait 60s for crictl version
	I1009 20:17:14.077412   64109 ssh_runner.go:195] Run: which crictl
	I1009 20:17:14.081272   64109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:14.120851   64109 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:14.120951   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.148588   64109 ssh_runner.go:195] Run: crio --version
	I1009 20:17:14.178661   64109 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:12.743057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .Start
	I1009 20:17:12.743249   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring networks are active...
	I1009 20:17:12.743940   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network default is active
	I1009 20:17:12.744263   64287 main.go:141] libmachine: (old-k8s-version-169021) Ensuring network mk-old-k8s-version-169021 is active
	I1009 20:17:12.744639   64287 main.go:141] libmachine: (old-k8s-version-169021) Getting domain xml...
	I1009 20:17:12.745331   64287 main.go:141] libmachine: (old-k8s-version-169021) Creating domain...
	I1009 20:17:14.013679   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting to get IP...
	I1009 20:17:14.014647   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.015019   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.015101   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.015007   65185 retry.go:31] will retry after 236.047931ms: waiting for machine to come up
	I1009 20:17:14.252239   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.252610   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.252636   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.252568   65185 retry.go:31] will retry after 325.864911ms: waiting for machine to come up
	I1009 20:17:14.580315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.580940   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.580965   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.580878   65185 retry.go:31] will retry after 366.421043ms: waiting for machine to come up
	I1009 20:17:14.179897   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetIP
	I1009 20:17:14.183174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183497   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:17:14.183529   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:17:14.183702   64109 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:14.187948   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:14.201218   64109 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:14.201341   64109 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:14.201381   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:14.237137   64109 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:14.237210   64109 ssh_runner.go:195] Run: which lz4
	I1009 20:17:14.241492   64109 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:14.246237   64109 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:14.246270   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1009 20:17:15.633127   64109 crio.go:462] duration metric: took 1.391666515s to copy over tarball
	I1009 20:17:15.633221   64109 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
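The preload step just above follows a simple protocol: stat the target path (a non-zero exit is treated as "file missing"), copy the ~388 MB preloaded-images tarball over, then unpack it into /var with tar -I lz4 while preserving security xattrs. A hypothetical local sketch of that check-copy-extract flow; the paths and the external tar/lz4 binaries are assumptions for illustration, and the real flow copies over SSH rather than with cp:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        const tarball = "/preloaded.tar.lz4" // target path, as in the log
        const source = "preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4" // local cache copy

        // Copy only when the target is missing; stat exiting non-zero stands in for "not found".
        if err := run("stat", "-c", "%s %y", tarball); err != nil {
            fmt.Println("preload tarball missing, copying")
            if err := run("cp", source, tarball); err != nil {
                panic(err)
            }
        }
        // Extract with lz4 decompression, keeping security.capability xattrs.
        if err := run("tar", "--xattrs", "--xattrs-include", "security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
            panic(err)
        }
    }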
	I1009 20:17:15.073427   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.085878   63744 pod_ready.go:103] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:17.574480   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:17.574502   63744 pod_ready.go:82] duration metric: took 14.00878017s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:17.574511   63744 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:14.949258   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:14.949766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:14.949800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:14.949726   65185 retry.go:31] will retry after 498.276481ms: waiting for machine to come up
	I1009 20:17:15.450160   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:15.450601   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:15.450635   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:15.450548   65185 retry.go:31] will retry after 742.118922ms: waiting for machine to come up
	I1009 20:17:16.194707   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.195193   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.195232   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.195137   65185 retry.go:31] will retry after 583.713263ms: waiting for machine to come up
	I1009 20:17:16.780844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:16.781277   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:16.781302   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:16.781215   65185 retry.go:31] will retry after 936.435146ms: waiting for machine to come up
	I1009 20:17:17.719083   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:17.719558   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:17.719588   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:17.719503   65185 retry.go:31] will retry after 1.046822117s: waiting for machine to come up
	I1009 20:17:18.768306   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:18.768844   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:18.768872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:18.768798   65185 retry.go:31] will retry after 1.362599959s: waiting for machine to come up
	I1009 20:17:17.738682   64109 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.10542583s)
	I1009 20:17:17.738724   64109 crio.go:469] duration metric: took 2.105568099s to extract the tarball
	I1009 20:17:17.738733   64109 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:17.779611   64109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:17.834267   64109 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 20:17:17.834291   64109 cache_images.go:84] Images are preloaded, skipping loading
	I1009 20:17:17.834299   64109 kubeadm.go:934] updating node { 192.168.72.134 8444 v1.31.1 crio true true} ...
	I1009 20:17:17.834384   64109 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-733270 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
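In the kubelet unit fragment above, the empty ExecStart= line clears the base unit's command before the full command is set, which is the standard systemd drop-in override pattern; further down in this log the fragment is written out as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes). A hypothetical sketch of writing such a drop-in, for illustration only (the ExecStart flags are abbreviated here and this is not minikube's code):

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
    )

    func main() {
        // Clearing ExecStart first lets the drop-in replace, rather than extend, the base unit's command.
        dropIn := "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --config=/var/lib/kubelet/config.yaml\n"
        dir := "/etc/systemd/system/kubelet.service.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "10-kubeadm.conf"), []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
        // Reload systemd so the new drop-in takes effect on the next kubelet (re)start.
        if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }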
	I1009 20:17:17.834449   64109 ssh_runner.go:195] Run: crio config
	I1009 20:17:17.879236   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:17.879265   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:17.879286   64109 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:17.879306   64109 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-733270 NodeName:default-k8s-diff-port-733270 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:17:17.879467   64109 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-733270"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:17.879531   64109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:17:17.889847   64109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:17.889945   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:17.899292   64109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1009 20:17:17.915656   64109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:17.931802   64109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1009 20:17:17.949046   64109 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:17.953042   64109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:17.966741   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:18.099697   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:17:18.120535   64109 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270 for IP: 192.168.72.134
	I1009 20:17:18.120555   64109 certs.go:194] generating shared ca certs ...
	I1009 20:17:18.120570   64109 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:18.120700   64109 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:18.120734   64109 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:18.120743   64109 certs.go:256] generating profile certs ...
	I1009 20:17:18.120813   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.key
	I1009 20:17:18.120867   64109 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key.a935be89
	I1009 20:17:18.120910   64109 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key
	I1009 20:17:18.121023   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:18.121053   64109 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:18.121065   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:18.121107   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:18.121131   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:18.121165   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:18.121217   64109 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:18.121886   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:18.185147   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:18.221038   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:18.252242   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:18.295828   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1009 20:17:18.323898   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:18.348575   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:18.372580   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:18.396351   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:18.420726   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:18.444717   64109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:18.469594   64109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:18.485908   64109 ssh_runner.go:195] Run: openssl version
	I1009 20:17:18.492283   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:18.503167   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507900   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.507952   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:18.513847   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:18.524101   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:18.534793   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539332   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.539410   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:18.545077   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:17:18.555669   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:18.570727   64109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576515   64109 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.576585   64109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:18.582738   64109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:18.593855   64109 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:18.598553   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:18.604755   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:18.611554   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:18.617835   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:18.623671   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:18.629288   64109 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1009 20:17:18.634887   64109 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-733270 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-733270 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:18.634994   64109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:18.635040   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.676211   64109 cri.go:89] found id: ""
	I1009 20:17:18.676309   64109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:18.686685   64109 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:18.686706   64109 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:18.686758   64109 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:18.696573   64109 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:18.697474   64109 kubeconfig.go:125] found "default-k8s-diff-port-733270" server: "https://192.168.72.134:8444"
	I1009 20:17:18.699424   64109 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:18.708661   64109 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.134
	I1009 20:17:18.708693   64109 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:18.708705   64109 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:18.708756   64109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:18.747781   64109 cri.go:89] found id: ""
	I1009 20:17:18.747852   64109 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:18.765293   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:18.776296   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:18.776315   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:18.776363   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:17:18.785075   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:18.785132   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:18.794089   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:17:18.802663   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:18.802710   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:18.811834   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.820562   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:18.820611   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:18.829603   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:17:18.838162   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:18.838214   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:18.847131   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:18.856597   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:18.963398   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.093311   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.129878409s)
	I1009 20:17:20.093347   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.311144   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.405808   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:20.500323   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:20.500417   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.001420   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:21.501473   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.000842   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:19.581480   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:22.081200   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:20.133416   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:20.133841   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:20.133872   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:20.133789   65185 retry.go:31] will retry after 1.900366713s: waiting for machine to come up
	I1009 20:17:22.036076   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:22.036465   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:22.036499   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:22.036421   65185 retry.go:31] will retry after 2.419471311s: waiting for machine to come up
	I1009 20:17:24.458015   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:24.458410   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:24.458441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:24.458379   65185 retry.go:31] will retry after 2.284501028s: waiting for machine to come up
	I1009 20:17:22.500576   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:22.517320   64109 api_server.go:72] duration metric: took 2.016990608s to wait for apiserver process to appear ...
	I1009 20:17:22.517349   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:17:22.517371   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.392466   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.392500   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.392516   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.432214   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:17:25.432243   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:17:25.518413   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:25.537284   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:25.537328   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.017494   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.022548   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.022581   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:26.518206   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:26.523173   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:17:26.523198   64109 api_server.go:103] status: https://192.168.72.134:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:17:27.017735   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:17:27.022557   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:17:27.031462   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:17:27.031486   64109 api_server.go:131] duration metric: took 4.514131072s to wait for apiserver health ...
	I1009 20:17:27.031494   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:17:27.031500   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:27.033659   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:17:27.035055   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:17:27.045141   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:17:27.062887   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:17:27.070777   64109 system_pods.go:59] 8 kube-system pods found
	I1009 20:17:27.070810   64109 system_pods.go:61] "coredns-7c65d6cfc9-vz7nx" [c9474b15-ac87-4b81-a239-6f4f3563c708] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:17:27.070820   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [ef686f1a-21a5-4058-b8ca-6e719415d778] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:17:27.070833   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [60a13042-6ddb-41c9-993b-a351aad64ec3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:17:27.070842   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [d876ca14-7014-4891-965a-83cadccc4416] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:17:27.070848   64109 system_pods.go:61] "kube-proxy-zr4bl" [4545b380-2d43-415a-97aa-c245a19d8aff] Running
	I1009 20:17:27.070859   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [d2ff89d7-03cf-430c-aa64-278d800d7fa4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:17:27.070870   64109 system_pods.go:61] "metrics-server-6867b74b74-8p24l" [133ac2dc-236a-4ad6-886a-33b132ff5b65] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:17:27.070890   64109 system_pods.go:61] "storage-provisioner" [b82a4bd2-62d3-4eee-b17c-c0ae22b2bd3b] Running
	I1009 20:17:27.070902   64109 system_pods.go:74] duration metric: took 7.993626ms to wait for pod list to return data ...
	I1009 20:17:27.070914   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:17:27.074265   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:17:27.074290   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:17:27.074301   64109 node_conditions.go:105] duration metric: took 3.379591ms to run NodePressure ...
	I1009 20:17:27.074327   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:27.337687   64109 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342418   64109 kubeadm.go:739] kubelet initialised
	I1009 20:17:27.342438   64109 kubeadm.go:740] duration metric: took 4.72219ms waiting for restarted kubelet to initialise ...
	I1009 20:17:27.342446   64109 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:17:27.347265   64109 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.351569   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351587   64109 pod_ready.go:82] duration metric: took 4.298933ms for pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.351595   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "coredns-7c65d6cfc9-vz7nx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.351600   64109 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.355636   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355657   64109 pod_ready.go:82] duration metric: took 4.050576ms for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.355666   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.355672   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.359739   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359758   64109 pod_ready.go:82] duration metric: took 4.080099ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.359767   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.359773   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.466469   64109 pod_ready.go:98] node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466514   64109 pod_ready.go:82] duration metric: took 106.729243ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	E1009 20:17:27.466530   64109 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-733270" hosting pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-733270" has status "Ready":"False"
	I1009 20:17:27.466546   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:24.081959   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.581477   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:26.744084   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:26.744443   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:26.744468   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:26.744421   65185 retry.go:31] will retry after 2.772640247s: waiting for machine to come up
	I1009 20:17:29.519542   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:29.519877   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | unable to find current IP address of domain old-k8s-version-169021 in network mk-old-k8s-version-169021
	I1009 20:17:29.519897   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | I1009 20:17:29.519854   65185 retry.go:31] will retry after 5.534511505s: waiting for machine to come up
	I1009 20:17:27.866362   64109 pod_ready.go:93] pod "kube-proxy-zr4bl" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:27.866389   64109 pod_ready.go:82] duration metric: took 399.82454ms for pod "kube-proxy-zr4bl" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:27.866401   64109 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:29.872414   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.872979   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:29.081836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:31.580784   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.520055   63427 start.go:364] duration metric: took 1m0.914393022s to acquireMachinesLock for "no-preload-480205"
	I1009 20:17:36.520112   63427 start.go:96] Skipping create...Using existing machine configuration
	I1009 20:17:36.520120   63427 fix.go:54] fixHost starting: 
	I1009 20:17:36.520550   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:17:36.520590   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:17:36.541113   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33323
	I1009 20:17:36.541505   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:17:36.542133   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:17:36.542161   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:17:36.542522   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:17:36.542701   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:36.542849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:17:36.544749   63427 fix.go:112] recreateIfNeeded on no-preload-480205: state=Stopped err=<nil>
	I1009 20:17:36.544774   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	W1009 20:17:36.544962   63427 fix.go:138] unexpected machine state, will restart: <nil>
	I1009 20:17:36.546948   63427 out.go:177] * Restarting existing kvm2 VM for "no-preload-480205" ...
	I1009 20:17:34.373083   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.373497   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:35.056703   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057338   64287 main.go:141] libmachine: (old-k8s-version-169021) Found IP for machine: 192.168.61.119
	I1009 20:17:35.057370   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has current primary IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.057378   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserving static IP address...
	I1009 20:17:35.057996   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.058019   64287 main.go:141] libmachine: (old-k8s-version-169021) Reserved static IP address: 192.168.61.119
	I1009 20:17:35.058036   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | skip adding static IP to network mk-old-k8s-version-169021 - found existing host DHCP lease matching {name: "old-k8s-version-169021", mac: "52:54:00:67:df:c3", ip: "192.168.61.119"}
	I1009 20:17:35.058052   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Getting to WaitForSSH function...
	I1009 20:17:35.058069   64287 main.go:141] libmachine: (old-k8s-version-169021) Waiting for SSH to be available...
	I1009 20:17:35.060324   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060560   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.060586   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.060678   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH client type: external
	I1009 20:17:35.060702   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa (-rw-------)
	I1009 20:17:35.060735   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.119 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:35.060750   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | About to run SSH command:
	I1009 20:17:35.060766   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | exit 0
	I1009 20:17:35.183369   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | SSH cmd err, output: <nil>: 
	I1009 20:17:35.183732   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetConfigRaw
	I1009 20:17:35.184294   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.186404   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186691   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.186728   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.186912   64287 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/config.json ...
	I1009 20:17:35.187139   64287 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:35.187158   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:35.187361   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.189504   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189784   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.189814   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.189904   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.190057   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190169   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.190309   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.190422   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.190610   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.190626   64287 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:35.295510   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:35.295543   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295782   64287 buildroot.go:166] provisioning hostname "old-k8s-version-169021"
	I1009 20:17:35.295804   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.295994   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.298548   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.298930   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.298964   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.299120   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.299266   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299418   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.299547   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.299737   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.299899   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.299912   64287 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-169021 && echo "old-k8s-version-169021" | sudo tee /etc/hostname
	I1009 20:17:35.426217   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169021
	
	I1009 20:17:35.426246   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.428993   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429315   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.429348   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.429554   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.429728   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.429885   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.430012   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.430164   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:35.430365   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:35.430391   64287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-169021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-169021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:35.544070   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 20:17:35.544098   64287 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:35.544136   64287 buildroot.go:174] setting up certificates
	I1009 20:17:35.544146   64287 provision.go:84] configureAuth start
	I1009 20:17:35.544155   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetMachineName
	I1009 20:17:35.544420   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:35.547109   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547419   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.547451   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.547618   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.549441   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549724   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.549757   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.549894   64287 provision.go:143] copyHostCerts
	I1009 20:17:35.549945   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:35.549955   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:35.550007   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:35.550109   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:35.550119   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:35.550139   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:35.550201   64287 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:35.550207   64287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:35.550224   64287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:35.550274   64287 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169021 san=[127.0.0.1 192.168.61.119 localhost minikube old-k8s-version-169021]
	I1009 20:17:35.892413   64287 provision.go:177] copyRemoteCerts
	I1009 20:17:35.892470   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:35.892492   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:35.894921   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895231   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:35.895262   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:35.895409   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:35.895585   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:35.895750   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:35.895870   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:35.978537   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:36.003667   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1009 20:17:36.029724   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
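	(Note: the three scp operations above put the CA and the freshly generated server certificate on the guest; the SANs were listed at provision.go:117. As a minimal sketch, not part of the harness and assuming openssl is available in the guest image, the installed certificate could be inspected over the same SSH session:)
	# hypothetical spot-check of the server cert minikube just copied to /etc/docker
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected SANs: 127.0.0.1, 192.168.61.119, localhost, minikube, old-k8s-version-169021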
	I1009 20:17:36.053321   64287 provision.go:87] duration metric: took 509.163583ms to configureAuth
	I1009 20:17:36.053347   64287 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:36.053517   64287 config.go:182] Loaded profile config "old-k8s-version-169021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:17:36.053589   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.056411   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.056740   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.056769   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.057023   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.057214   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057396   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.057533   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.057684   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.057847   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.057862   64287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:36.281284   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:36.281316   64287 machine.go:96] duration metric: took 1.094164441s to provisionDockerMachine
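	(Note: the sysconfig command above writes /etc/sysconfig/crio.minikube and restarts CRI-O so the insecure-registry flag covers the service CIDR. A hedged way to confirm the flag reached the running daemon, assuming pgrep is present and that crio.service expands that environment file onto its command line as on the minikube ISO:)
	cat /etc/sysconfig/crio.minikube
	pgrep -a crio | grep -o -- '--insecure-registry[ =][^ ]*' || echo 'flag not visible on the crio command line'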
	I1009 20:17:36.281327   64287 start.go:293] postStartSetup for "old-k8s-version-169021" (driver="kvm2")
	I1009 20:17:36.281339   64287 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:36.281386   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.281686   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:36.281711   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.284445   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284800   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.284825   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.284990   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.285132   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.285255   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.285405   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.370146   64287 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:36.374951   64287 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:36.374972   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:36.375040   64287 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:36.375158   64287 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:36.375286   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:36.384857   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:36.407811   64287 start.go:296] duration metric: took 126.472907ms for postStartSetup
	I1009 20:17:36.407852   64287 fix.go:56] duration metric: took 23.68797707s for fixHost
	I1009 20:17:36.407875   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.410584   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.410949   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.410979   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.411118   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.411292   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411461   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.411593   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.411768   64287 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:36.411943   64287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.119 22 <nil> <nil>}
	I1009 20:17:36.411966   64287 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:36.519849   64287 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505056.472929841
	
	I1009 20:17:36.519877   64287 fix.go:216] guest clock: 1728505056.472929841
	I1009 20:17:36.519887   64287 fix.go:229] Guest: 2024-10-09 20:17:36.472929841 +0000 UTC Remote: 2024-10-09 20:17:36.407856716 +0000 UTC m=+231.827419064 (delta=65.073125ms)
	I1009 20:17:36.519944   64287 fix.go:200] guest clock delta is within tolerance: 65.073125ms
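	(Note: fix.go compares guest and host wall clocks by running date +%s.%N over SSH and only resyncs when the delta exceeds its tolerance. A simplified standalone sketch of the same comparison; the key path and IP are taken from the log above, and host-key checking is ignored for brevity:)
	host_now=$(date +%s.%N)
	guest_now=$(ssh -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa docker@192.168.61.119 'date +%s.%N')
	# print the guest-minus-host delta in milliseconds
	awk -v g="$guest_now" -v h="$host_now" 'BEGIN { printf "delta=%.3f ms\n", (g - h) * 1000 }'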
	I1009 20:17:36.519956   64287 start.go:83] releasing machines lock for "old-k8s-version-169021", held for 23.800110205s
	I1009 20:17:36.520000   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.520321   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:36.523296   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523653   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.523701   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.523890   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524453   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524658   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .DriverName
	I1009 20:17:36.524781   64287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:36.524822   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.524855   64287 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:36.524883   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHHostname
	I1009 20:17:36.527948   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528030   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528336   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528362   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528389   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:36.528414   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:36.528670   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528681   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHPort
	I1009 20:17:36.528874   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.528880   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHKeyPath
	I1009 20:17:36.529031   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529035   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetSSHUsername
	I1009 20:17:36.529170   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.529191   64287 sshutil.go:53] new ssh client: &{IP:192.168.61.119 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/old-k8s-version-169021/id_rsa Username:docker}
	I1009 20:17:36.634262   64287 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:36.640126   64287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:36.794481   64287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:36.801536   64287 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:36.801615   64287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:36.825211   64287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 20:17:36.825237   64287 start.go:495] detecting cgroup driver to use...
	I1009 20:17:36.825299   64287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:36.842016   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:36.861052   64287 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:36.861112   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:36.878185   64287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:36.892044   64287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:37.010989   64287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:37.181313   64287 docker.go:233] disabling docker service ...
	I1009 20:17:37.181373   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:37.201726   64287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:37.218403   64287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:37.330869   64287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:37.458670   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:37.474832   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:37.496062   64287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 20:17:37.496111   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.509926   64287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:37.509984   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.527671   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.543857   64287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:37.554871   64287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:37.566057   64287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:37.578675   64287 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:37.578757   64287 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:37.593475   64287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
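	(Note: the exit status 255 above is tolerated because the net.bridge.bridge-nf-call-iptables key only exists once br_netfilter is loaded, which is exactly what the following modprobe does. Shown in isolation as a sketch:)
	sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null \
	  || sudo modprobe br_netfilter                       # the sysctl key appears only after the module loads
	sudo sysctl net.bridge.bridge-nf-call-iptables        # should now print 0 or 1
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"   # same forwarding toggle the provisioner applies above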
	I1009 20:17:37.608210   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:37.756273   64287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:37.857693   64287 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:37.857759   64287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:37.863522   64287 start.go:563] Will wait 60s for crictl version
	I1009 20:17:37.863561   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:37.868216   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:37.908445   64287 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:37.908519   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.939400   64287 ssh_runner.go:195] Run: crio --version
	I1009 20:17:37.971447   64287 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1009 20:17:36.548231   63427 main.go:141] libmachine: (no-preload-480205) Calling .Start
	I1009 20:17:36.548387   63427 main.go:141] libmachine: (no-preload-480205) Ensuring networks are active...
	I1009 20:17:36.549099   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network default is active
	I1009 20:17:36.549384   63427 main.go:141] libmachine: (no-preload-480205) Ensuring network mk-no-preload-480205 is active
	I1009 20:17:36.549760   63427 main.go:141] libmachine: (no-preload-480205) Getting domain xml...
	I1009 20:17:36.550533   63427 main.go:141] libmachine: (no-preload-480205) Creating domain...
	I1009 20:17:37.839932   63427 main.go:141] libmachine: (no-preload-480205) Waiting to get IP...
	I1009 20:17:37.840843   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:37.841295   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:37.841405   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:37.841286   65353 retry.go:31] will retry after 306.803832ms: waiting for machine to come up
	I1009 20:17:33.581531   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:36.080661   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:38.083154   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:37.972687   64287 main.go:141] libmachine: (old-k8s-version-169021) Calling .GetIP
	I1009 20:17:37.975928   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976352   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:df:c3", ip: ""} in network mk-old-k8s-version-169021: {Iface:virbr3 ExpiryTime:2024-10-09 21:17:24 +0000 UTC Type:0 Mac:52:54:00:67:df:c3 Iaid: IPaddr:192.168.61.119 Prefix:24 Hostname:old-k8s-version-169021 Clientid:01:52:54:00:67:df:c3}
	I1009 20:17:37.976382   64287 main.go:141] libmachine: (old-k8s-version-169021) DBG | domain old-k8s-version-169021 has defined IP address 192.168.61.119 and MAC address 52:54:00:67:df:c3 in network mk-old-k8s-version-169021
	I1009 20:17:37.976637   64287 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:37.980809   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:37.993206   64287 kubeadm.go:883] updating cluster {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:37.993359   64287 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 20:17:37.993402   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:38.043755   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:38.043813   64287 ssh_runner.go:195] Run: which lz4
	I1009 20:17:38.048189   64287 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1009 20:17:38.052553   64287 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 20:17:38.052584   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1009 20:17:38.374526   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.376238   64109 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:40.874242   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:17:40.874269   64109 pod_ready.go:82] duration metric: took 13.007861108s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:40.874282   64109 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	I1009 20:17:38.149878   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.150291   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.150317   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.150240   65353 retry.go:31] will retry after 331.657929ms: waiting for machine to come up
	I1009 20:17:38.483773   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.484236   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.484259   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.484184   65353 retry.go:31] will retry after 320.466882ms: waiting for machine to come up
	I1009 20:17:38.806862   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:38.807342   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:38.807370   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:38.807304   65353 retry.go:31] will retry after 515.558491ms: waiting for machine to come up
	I1009 20:17:39.324105   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:39.324656   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:39.324687   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:39.324624   65353 retry.go:31] will retry after 742.624052ms: waiting for machine to come up
	I1009 20:17:40.068871   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.069333   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.069361   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.069242   65353 retry.go:31] will retry after 627.591329ms: waiting for machine to come up
	I1009 20:17:40.698046   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:40.698539   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:40.698590   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:40.698482   65353 retry.go:31] will retry after 1.099340902s: waiting for machine to come up
	I1009 20:17:41.799879   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:41.800304   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:41.800334   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:41.800260   65353 retry.go:31] will retry after 954.068599ms: waiting for machine to come up
	I1009 20:17:42.756258   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:42.756730   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:42.756756   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:42.756692   65353 retry.go:31] will retry after 1.483165135s: waiting for machine to come up
	I1009 20:17:40.581834   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:42.583105   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:39.710338   64287 crio.go:462] duration metric: took 1.662187364s to copy over tarball
	I1009 20:17:39.710411   64287 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 20:17:42.694067   64287 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.983621241s)
	I1009 20:17:42.694097   64287 crio.go:469] duration metric: took 2.98372831s to extract the tarball
	I1009 20:17:42.694106   64287 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1009 20:17:42.739749   64287 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:42.782349   64287 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1009 20:17:42.782374   64287 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:42.782447   64287 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.782474   64287 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.782512   64287 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.782544   64287 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1009 20:17:42.782549   64287 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.782732   64287 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.782486   64287 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.782788   64287 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.784992   64287 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.785024   64287 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.784995   64287 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:42.785000   64287 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:42.785007   64287 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.785070   64287 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 20:17:42.785030   64287 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:42.785471   64287 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:42.936283   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:42.937808   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:42.960488   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:42.971814   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1009 20:17:42.977796   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.004153   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.014701   64287 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1009 20:17:43.014748   64287 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.014795   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.025133   64287 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1009 20:17:43.025170   64287 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.025204   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086484   64287 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1009 20:17:43.086512   64287 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1009 20:17:43.086532   64287 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.086541   64287 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 20:17:43.086579   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.086581   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.097814   64287 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1009 20:17:43.097859   64287 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.097909   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103497   64287 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1009 20:17:43.103532   64287 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.103548   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.103569   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.103677   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.103745   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.103799   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.105640   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.203854   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.220635   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.220670   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.220793   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.232794   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.232901   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.232905   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.389992   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1009 20:17:43.390038   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1009 20:17:43.389991   64287 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1009 20:17:43.390081   64287 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.390097   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.390112   64287 ssh_runner.go:195] Run: which crictl
	I1009 20:17:43.390166   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 20:17:43.390187   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1009 20:17:43.390247   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1009 20:17:43.475244   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1009 20:17:43.536485   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1009 20:17:43.536569   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1009 20:17:43.538738   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1009 20:17:43.538812   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1009 20:17:43.538863   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1009 20:17:43.538880   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.597357   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1009 20:17:43.597449   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.630702   64287 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1009 20:17:43.668841   64287 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1009 20:17:44.007657   64287 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:44.151174   64287 cache_images.go:92] duration metric: took 1.368780539s to LoadCachedImages
	W1009 20:17:44.151263   64287 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
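	(Note: after the LoadCachedImages warning above, the run continues with whatever the container runtime already holds; anything still missing is expected to be pulled later during kubeadm bootstrap. A hedged way to see what the runtime actually has at this point, using the same crictl binary invoked throughout the log:)
	# list the control-plane images present in CRI-O's store on the guest
	sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'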
	I1009 20:17:44.151285   64287 kubeadm.go:934] updating node { 192.168.61.119 8443 v1.20.0 crio true true} ...
	I1009 20:17:44.151432   64287 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-169021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.119
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 20:17:44.151500   64287 ssh_runner.go:195] Run: crio config
	I1009 20:17:44.208126   64287 cni.go:84] Creating CNI manager for ""
	I1009 20:17:44.208148   64287 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:17:44.208165   64287 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:17:44.208183   64287 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.119 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169021 NodeName:old-k8s-version-169021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.119"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.119 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 20:17:44.208361   64287 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.119
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-169021"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.119
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.119"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:17:44.208437   64287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1009 20:17:44.218743   64287 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:17:44.218813   64287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:17:44.228160   64287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 20:17:44.245304   64287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:17:44.262787   64287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
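	(Note: the kubeadm manifest rendered earlier is what just landed as /var/tmp/minikube/kubeadm.yaml.new; it should contain four YAML documents. A minimal sanity check on the guest, shown only as a sketch:)
	grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
	# expected, in order: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration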
	I1009 20:17:44.280742   64287 ssh_runner.go:195] Run: grep 192.168.61.119	control-plane.minikube.internal$ /etc/hosts
	I1009 20:17:44.285502   64287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.119	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:44.299434   64287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:44.427216   64287 ssh_runner.go:195] Run: sudo systemctl start kubelet
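	(Note: at this point the drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf has been written, the daemon reloaded, and kubelet started with the flags shown in the unit above. A hedged check of what systemd actually resolved:)
	systemctl is-active kubelet
	systemctl cat kubelet | grep -A1 '^ExecStart='
	# the drop-in should win: the v1.20.0 binary with --node-ip=192.168.61.119 and the crio socket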
	I1009 20:17:44.445239   64287 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021 for IP: 192.168.61.119
	I1009 20:17:44.445262   64287 certs.go:194] generating shared ca certs ...
	I1009 20:17:44.445282   64287 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:44.445454   64287 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:17:44.445516   64287 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:17:44.445538   64287 certs.go:256] generating profile certs ...
	I1009 20:17:44.445663   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.key
	I1009 20:17:44.445728   64287 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key.f77cd192
	I1009 20:17:44.445780   64287 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key
	I1009 20:17:44.445920   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:17:44.445961   64287 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:17:44.445976   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:17:44.446008   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:17:44.446041   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:17:44.446074   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:17:44.446130   64287 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:44.446993   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:17:44.498205   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:17:44.525945   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:17:44.572216   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:17:44.614281   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1009 20:17:42.881058   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:45.654206   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.242356   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:44.242846   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:44.242873   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:44.242792   65353 retry.go:31] will retry after 1.589482004s: waiting for machine to come up
	I1009 20:17:45.834679   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:45.835135   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:45.835176   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:45.835093   65353 retry.go:31] will retry after 1.757206304s: waiting for machine to come up
	I1009 20:17:47.593468   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:47.593954   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:47.593987   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:47.593889   65353 retry.go:31] will retry after 2.938063418s: waiting for machine to come up
	I1009 20:17:45.082377   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:47.581271   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:44.661644   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 20:17:44.695246   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:17:44.719043   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 20:17:44.743825   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:17:44.768013   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:17:44.793698   64287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:17:44.819945   64287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:17:44.840340   64287 ssh_runner.go:195] Run: openssl version
	I1009 20:17:44.847883   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:17:44.858853   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863657   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.863707   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:17:44.871190   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:17:44.885414   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:17:44.900030   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904894   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.904958   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:17:44.912406   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:17:44.925128   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:17:44.936358   64287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940937   64287 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.940995   64287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:17:44.946995   64287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
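
The three ls/openssl/ln cycles above install each certificate copied to /usr/share/ca-certificates as a trusted root: the file is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs under <hash>.0, where OpenSSL's lookup expects it. A hypothetical stand-alone sketch of the same idea for one file (paths taken from the log; this is an illustration, not the actual certs.go logic):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        src := "/usr/share/ca-certificates/minikubeCA.pem"

        // Ask openssl for the subject hash, exactly as the log does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
        if err != nil {
            panic(err)
        }

        // Link /etc/ssl/certs/<hash>.0 -> the CA file, mirroring `ln -fs`.
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link
        if err := os.Symlink(src, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", src, "->", link)
    }
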
	I1009 20:17:44.958154   64287 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:17:44.962846   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:17:44.968749   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:17:44.974659   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:17:44.980867   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:17:44.986827   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:17:44.992741   64287 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
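
The `-checkend 86400` runs above only ask whether each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. An equivalent, hypothetical check in Go with crypto/x509 (certificate paths copied from the log; shown for illustration, not what the test itself runs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Same certificates the log checks with `openssl x509 -checkend 86400`.
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            fmt.Println(p, "expires within 24h:", soon, "err:", err)
        }
    }
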
	I1009 20:17:44.998932   64287 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-169021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.119 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:17:44.999030   64287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:17:44.999107   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.037766   64287 cri.go:89] found id: ""
	I1009 20:17:45.037847   64287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:17:45.050640   64287 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:17:45.050661   64287 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:17:45.050717   64287 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:17:45.061420   64287 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:17:45.062835   64287 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169021" does not appear in /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:17:45.063886   64287 kubeconfig.go:62] /home/jenkins/minikube-integration/19780-9412/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169021" cluster setting kubeconfig missing "old-k8s-version-169021" context setting]
	I1009 20:17:45.065224   64287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:17:45.137319   64287 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:17:45.149285   64287 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.119
	I1009 20:17:45.149318   64287 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:17:45.149331   64287 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:17:45.149386   64287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:17:45.191415   64287 cri.go:89] found id: ""
	I1009 20:17:45.191494   64287 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:17:45.208982   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:17:45.219143   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:17:45.219166   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:17:45.219219   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:17:45.229113   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:17:45.229199   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:17:45.239745   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:17:45.249766   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:17:45.249844   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:17:45.260185   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.271441   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:17:45.271500   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:17:45.281343   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:17:45.291026   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:17:45.291094   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:17:45.301052   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:17:45.311369   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:45.520151   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.097892   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.359594   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.466328   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:17:46.574255   64287 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:17:46.574365   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.574634   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.074595   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:48.575187   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.074428   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:49.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:47.880869   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:49.881585   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.381306   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.535997   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:50.536376   63427 main.go:141] libmachine: (no-preload-480205) DBG | unable to find current IP address of domain no-preload-480205 in network mk-no-preload-480205
	I1009 20:17:50.536400   63427 main.go:141] libmachine: (no-preload-480205) DBG | I1009 20:17:50.536340   65353 retry.go:31] will retry after 3.744305095s: waiting for machine to come up
	I1009 20:17:49.581868   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:52.080469   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:50.075027   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:50.575160   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.075457   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:51.574838   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.075036   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:52.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.075071   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:53.575204   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.074552   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:54.574415   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
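
The repeated pgrep lines above are a poll: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` command is retried roughly every 500ms until the restarted API server process appears. A minimal hypothetical sketch of that pattern (the 2-minute deadline is an assumption; minikube's real wait logic lives elsewhere):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        for {
            // Same probe the log runs over SSH.
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil && len(out) > 0 {
                fmt.Printf("apiserver process found, pid(s): %s", out)
                return
            }
            select {
            case <-ctx.Done():
                fmt.Println("timed out waiting for kube-apiserver process")
                return
            case <-time.After(500 * time.Millisecond):
                // retry
            }
        }
    }
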
	I1009 20:17:54.284206   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.284770   63427 main.go:141] libmachine: (no-preload-480205) Found IP for machine: 192.168.39.162
	I1009 20:17:54.284795   63427 main.go:141] libmachine: (no-preload-480205) Reserving static IP address...
	I1009 20:17:54.284809   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has current primary IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.285276   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.285315   63427 main.go:141] libmachine: (no-preload-480205) DBG | skip adding static IP to network mk-no-preload-480205 - found existing host DHCP lease matching {name: "no-preload-480205", mac: "52:54:00:1d:fc:59", ip: "192.168.39.162"}
	I1009 20:17:54.285330   63427 main.go:141] libmachine: (no-preload-480205) Reserved static IP address: 192.168.39.162
	I1009 20:17:54.285344   63427 main.go:141] libmachine: (no-preload-480205) Waiting for SSH to be available...
	I1009 20:17:54.285356   63427 main.go:141] libmachine: (no-preload-480205) DBG | Getting to WaitForSSH function...
	I1009 20:17:54.287561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287809   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.287838   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.287920   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH client type: external
	I1009 20:17:54.287947   63427 main.go:141] libmachine: (no-preload-480205) DBG | Using SSH private key: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa (-rw-------)
	I1009 20:17:54.287988   63427 main.go:141] libmachine: (no-preload-480205) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.162 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1009 20:17:54.288001   63427 main.go:141] libmachine: (no-preload-480205) DBG | About to run SSH command:
	I1009 20:17:54.288014   63427 main.go:141] libmachine: (no-preload-480205) DBG | exit 0
	I1009 20:17:54.414835   63427 main.go:141] libmachine: (no-preload-480205) DBG | SSH cmd err, output: <nil>: 
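
The WaitForSSH step above shells out to the system ssh client with host-key checking disabled and runs `exit 0` until the guest answers; the `<nil>` error marks success. A hypothetical sketch of that probe using the key path and address from the log (retry count, interval, and the reduced option set are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // A subset of the ssh options visible in the log above.
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", "/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa",
            "-p", "22",
            "docker@192.168.39.162",
            "exit 0",
        }
        for i := 0; i < 30; i++ {
            if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for SSH")
    }
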
	I1009 20:17:54.415251   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetConfigRaw
	I1009 20:17:54.415965   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.418617   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.418968   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.418992   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.419252   63427 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/config.json ...
	I1009 20:17:54.419452   63427 machine.go:93] provisionDockerMachine start ...
	I1009 20:17:54.419470   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:54.419664   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.421796   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422088   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.422120   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.422233   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.422406   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422550   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.422839   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.423013   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.423242   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.423254   63427 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 20:17:54.531462   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1009 20:17:54.531497   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531718   63427 buildroot.go:166] provisioning hostname "no-preload-480205"
	I1009 20:17:54.531744   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.531956   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.534433   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534788   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.534816   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.534935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.535138   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535286   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.535418   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.535601   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.535774   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.535785   63427 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-480205 && echo "no-preload-480205" | sudo tee /etc/hostname
	I1009 20:17:54.659155   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-480205
	
	I1009 20:17:54.659228   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.661958   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662288   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.662313   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.662511   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.662681   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662842   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.662987   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.663179   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:54.663354   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:54.663370   63427 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-480205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-480205/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-480205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 20:17:54.779856   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
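
The shell fragment above makes the hostname change stick for local name resolution: if an existing 127.0.1.1 entry is present it is rewritten to point at no-preload-480205, otherwise one is appended. A hypothetical Go equivalent of that idempotent edit (for illustration only; the test does it over SSH with sed/tee exactly as logged):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const name = "no-preload-480205" // machine name from the log
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }

        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+name)
        }

        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(lines, "\n")), 0644); err != nil {
            panic(err)
        }
        fmt.Println("hostname alias ensured for", name)
    }
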
	I1009 20:17:54.779881   63427 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9412/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9412/.minikube}
	I1009 20:17:54.779916   63427 buildroot.go:174] setting up certificates
	I1009 20:17:54.779926   63427 provision.go:84] configureAuth start
	I1009 20:17:54.779935   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetMachineName
	I1009 20:17:54.780180   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:54.782673   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783013   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.783045   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.783171   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.785450   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785780   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.785807   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.785945   63427 provision.go:143] copyHostCerts
	I1009 20:17:54.786024   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem, removing ...
	I1009 20:17:54.786041   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem
	I1009 20:17:54.786107   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/ca.pem (1082 bytes)
	I1009 20:17:54.786282   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem, removing ...
	I1009 20:17:54.786294   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem
	I1009 20:17:54.786327   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/cert.pem (1123 bytes)
	I1009 20:17:54.786402   63427 exec_runner.go:144] found /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem, removing ...
	I1009 20:17:54.786412   63427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem
	I1009 20:17:54.786439   63427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9412/.minikube/key.pem (1679 bytes)
	I1009 20:17:54.786503   63427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem org=jenkins.no-preload-480205 san=[127.0.0.1 192.168.39.162 localhost minikube no-preload-480205]
	I1009 20:17:54.929212   63427 provision.go:177] copyRemoteCerts
	I1009 20:17:54.929265   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 20:17:54.929292   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:54.931970   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932355   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:54.932402   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:54.932506   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:54.932693   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:54.932849   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:54.932979   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.017690   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1009 20:17:55.042746   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1009 20:17:55.066760   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 20:17:55.094790   63427 provision.go:87] duration metric: took 314.853512ms to configureAuth
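
configureAuth above generates a docker-machine style server certificate whose SANs cover 127.0.0.1, the guest IP 192.168.39.162, localhost, minikube and the machine name, signs it with the profile CA, and copies ca.pem/server.pem/server-key.pem into /etc/docker on the guest. A rough, hypothetical sketch of issuing such a SAN-bearing certificate with crypto/x509 (relative paths, the org name, the 3-year lifetime, and the PKCS#1 CA key format are assumptions; error handling is kept minimal):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func main() {
        // Load the profile CA pair (paths shortened from the ones in the log).
        caBlock, _ := pem.Decode(must(os.ReadFile("certs/ca.pem")))
        caCert := must(x509.ParseCertificate(caBlock.Bytes))
        keyBlock, _ := pem.Decode(must(os.ReadFile("certs/ca-key.pem")))
        caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

        // Fresh server key plus a template carrying the SANs reported above.
        key := must(rsa.GenerateKey(rand.Reader, 2048))
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-480205"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-480205"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.162")},
        }

        der := must(x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey))
        if err := os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("server-key.pem",
            pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600); err != nil {
            panic(err)
        }
    }
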
	I1009 20:17:55.094830   63427 buildroot.go:189] setting minikube options for container-runtime
	I1009 20:17:55.095022   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:17:55.095125   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.097730   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098041   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.098078   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.098257   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.098452   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098647   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.098764   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.098926   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.099111   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.099129   63427 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 20:17:55.325505   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 20:17:55.325552   63427 machine.go:96] duration metric: took 906.085773ms to provisionDockerMachine
	I1009 20:17:55.325565   63427 start.go:293] postStartSetup for "no-preload-480205" (driver="kvm2")
	I1009 20:17:55.325576   63427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 20:17:55.325596   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.325884   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 20:17:55.325911   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.328326   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328595   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.328622   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.328750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.328920   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.329086   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.329197   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.413322   63427 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 20:17:55.417428   63427 info.go:137] Remote host: Buildroot 2023.02.9
	I1009 20:17:55.417451   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/addons for local assets ...
	I1009 20:17:55.417531   63427 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9412/.minikube/files for local assets ...
	I1009 20:17:55.417634   63427 filesync.go:149] local asset: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem -> 166072.pem in /etc/ssl/certs
	I1009 20:17:55.417758   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 20:17:55.426893   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:17:55.451335   63427 start.go:296] duration metric: took 125.757549ms for postStartSetup
	I1009 20:17:55.451372   63427 fix.go:56] duration metric: took 18.931252408s for fixHost
	I1009 20:17:55.451395   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.453854   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454177   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.454222   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.454403   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.454581   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454734   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.454872   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.455026   63427 main.go:141] libmachine: Using SSH client type: native
	I1009 20:17:55.455241   63427 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.162 22 <nil> <nil>}
	I1009 20:17:55.455254   63427 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1009 20:17:55.564201   63427 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728505075.515960663
	
	I1009 20:17:55.564224   63427 fix.go:216] guest clock: 1728505075.515960663
	I1009 20:17:55.564232   63427 fix.go:229] Guest: 2024-10-09 20:17:55.515960663 +0000 UTC Remote: 2024-10-09 20:17:55.451376872 +0000 UTC m=+362.436821917 (delta=64.583791ms)
	I1009 20:17:55.564249   63427 fix.go:200] guest clock delta is within tolerance: 64.583791ms
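
The clock check above runs `date +%s.%N` on the guest and compares the result with a host-side timestamp taken around the same moment; the ~64.6ms delta is inside the allowed skew, so no clock adjustment is needed. A hypothetical sketch of that comparison using the two values captured in the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Guest output of `date +%s.%N` and the host-side reference time,
        // both copied from the log lines above.
        guestOut := "1728505075.515960663"
        hostRef := time.Date(2024, 10, 9, 20, 17, 55, 451376872, time.UTC)

        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            panic(err)
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(sec, nsec)

        delta := guest.Sub(hostRef)
        fmt.Println("guest clock delta:", delta) // ~64.583791ms, matching the log
    }
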
	I1009 20:17:55.564254   63427 start.go:83] releasing machines lock for "no-preload-480205", held for 19.044164758s
	I1009 20:17:55.564274   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.564496   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:55.567139   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567524   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.567561   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.567654   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568134   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568307   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:17:55.568372   63427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 20:17:55.568415   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.568499   63427 ssh_runner.go:195] Run: cat /version.json
	I1009 20:17:55.568524   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:17:55.571019   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571293   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571450   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571475   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571592   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571724   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:55.571746   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:55.571750   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.571897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:17:55.571898   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572039   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:17:55.572048   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.572151   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:17:55.572272   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:17:55.651437   63427 ssh_runner.go:195] Run: systemctl --version
	I1009 20:17:55.678289   63427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 20:17:55.826507   63427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1009 20:17:55.832338   63427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1009 20:17:55.832394   63427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 20:17:55.849232   63427 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
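
The find/mv step above parks any bridge or podman CNI configs in /etc/cni/net.d under a .mk_disabled suffix so they cannot conflict with the CNI the test expects; here it caught 87-podman-bridge.conflist. A hypothetical sketch of the same rename pass (matching by filename substring is a simplification of the find expression in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        const dir = "/etc/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", src)
            }
        }
    }
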
	I1009 20:17:55.849252   63427 start.go:495] detecting cgroup driver to use...
	I1009 20:17:55.849312   63427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 20:17:55.865490   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 20:17:55.880814   63427 docker.go:217] disabling cri-docker service (if available) ...
	I1009 20:17:55.880881   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 20:17:55.895380   63427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 20:17:55.911341   63427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 20:17:56.029690   63427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 20:17:56.206998   63427 docker.go:233] disabling docker service ...
	I1009 20:17:56.207078   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 20:17:56.223617   63427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 20:17:56.236949   63427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 20:17:56.357461   63427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 20:17:56.472412   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 20:17:56.486622   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 20:17:56.505189   63427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 20:17:56.505273   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.515661   63427 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 20:17:56.515714   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.525699   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.535795   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.545864   63427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 20:17:56.555956   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.565864   63427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 20:17:56.584950   63427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
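
The sed runs above edit CRI-O's drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs with conmon_cgroup = "pod", and ensure net.ipv4.ip_unprivileged_port_start=0 appears under default_sysctls. A hypothetical Go sketch of the first two rewrites (regex-based; the actual flow uses sed exactly as logged):

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }

        // Drop any existing conmon_cgroup line, then rewrite pause_image and
        // cgroup_manager, re-adding conmon_cgroup right after the latter.
        data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

        if err := os.WriteFile(path, data, 0644); err != nil {
            panic(err)
        }
    }
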
	I1009 20:17:56.596337   63427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 20:17:56.605878   63427 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 20:17:56.605945   63427 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 20:17:56.618105   63427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 20:17:56.627474   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:17:56.763925   63427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 20:17:56.866705   63427 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 20:17:56.866766   63427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 20:17:56.871946   63427 start.go:563] Will wait 60s for crictl version
	I1009 20:17:56.871990   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:56.875978   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 20:17:56.920375   63427 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1009 20:17:56.920497   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.950584   63427 ssh_runner.go:195] Run: crio --version
	I1009 20:17:56.983562   63427 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1009 20:17:54.883016   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:57.380454   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.984723   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetIP
	I1009 20:17:56.987544   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.987870   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:17:56.987896   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:17:56.988102   63427 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1009 20:17:56.992229   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 20:17:57.005052   63427 kubeadm.go:883] updating cluster {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 20:17:57.005203   63427 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 20:17:57.005261   63427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 20:17:57.048383   63427 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1009 20:17:57.048405   63427 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 20:17:57.048449   63427 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.048493   63427 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.048528   63427 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.048551   63427 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1009 20:17:57.048554   63427 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.048460   63427 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.048669   63427 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.048543   63427 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049897   63427 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.049914   63427 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.049917   63427 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.049899   63427 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.049903   63427 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:17:57.049966   63427 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.049968   63427 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1009 20:17:57.210906   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.216003   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.221539   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.238277   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.249962   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.251926   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.264094   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1009 20:17:57.278956   63427 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1009 20:17:57.279003   63427 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.279053   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.326574   63427 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1009 20:17:57.326623   63427 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.326667   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.356980   63427 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1009 20:17:57.356999   63427 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1009 20:17:57.357024   63427 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.357028   63427 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.357079   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.357082   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394166   63427 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1009 20:17:57.394211   63427 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.394308   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.394202   63427 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1009 20:17:57.394363   63427 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.394409   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:57.504627   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.504669   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.504677   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.504795   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.504866   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.504808   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.653815   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.653864   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.653922   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.653938   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.653976   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.654008   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798466   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1009 20:17:57.798526   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1009 20:17:57.798603   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1009 20:17:57.798638   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1009 20:17:57.798712   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1009 20:17:57.798725   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1009 20:17:57.919528   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1009 20:17:57.919602   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1009 20:17:57.919636   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.919668   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:17:57.923759   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1009 20:17:57.923835   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1009 20:17:57.923861   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1009 20:17:57.923841   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:17:57.923900   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:17:57.923908   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1009 20:17:57.923937   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:17:57.923979   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:17:57.933344   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1009 20:17:57.933364   63427 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.933384   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1009 20:17:57.933397   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1009 20:17:57.936970   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1009 20:17:57.937013   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1009 20:17:57.937014   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1009 20:17:57.937039   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1009 20:17:54.082018   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:56.581605   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:55.074932   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:55.575354   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.074536   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:56.575341   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.074580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:57.574737   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.074743   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:58.574712   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.074570   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:17:59.575178   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
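
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above (process 64287) are minikube polling roughly every 500ms for the kube-apiserver process to appear. A rough Go equivalent of that wait loop, with illustrative names rather than minikube's own:

// waitapiserver.go - sketch: poll for a kube-apiserver process via pgrep every 500ms.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
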
	I1009 20:17:59.381986   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.879741   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:17:58.234930   63427 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.729993   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.796562811s)
	I1009 20:18:01.730032   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1009 20:18:01.730055   63427 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730053   63427 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.495090196s)
	I1009 20:18:01.730094   63427 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1009 20:18:01.730108   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1009 20:18:01.730128   63427 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:01.730171   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:17:59.082693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:01.581215   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:00.075413   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:00.575344   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.074463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:01.574495   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.075077   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:02.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.074427   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.574544   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.075436   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:04.575477   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:03.881048   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.881675   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:03.709225   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.979095477s)
	I1009 20:18:03.709263   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1009 20:18:03.709270   63427 ssh_runner.go:235] Completed: which crictl: (1.979078895s)
	I1009 20:18:03.709293   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709328   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1009 20:18:03.709331   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677348   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.967992224s)
	I1009 20:18:05.677442   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:05.677451   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.968100259s)
	I1009 20:18:05.677472   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1009 20:18:05.677506   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.677576   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1009 20:18:05.717053   63427 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:07.172029   63427 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.454939952s)
	I1009 20:18:07.172088   63427 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 20:18:07.172034   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.49443869s)
	I1009 20:18:07.172161   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1009 20:18:07.172184   63427 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:07.172184   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:07.172274   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1009 20:18:03.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:06.082185   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:05.075031   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:05.574523   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.075121   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:06.575359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.074417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.574532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.075315   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:08.575052   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.075089   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:09.575013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:07.881820   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:09.882824   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:12.381749   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:08.827862   63427 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.655655014s)
	I1009 20:18:08.827897   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.655597185s)
	I1009 20:18:08.827906   63427 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1009 20:18:08.827911   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1009 20:18:08.827943   63427 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:08.828002   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1009 20:18:11.127762   63427 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.299736339s)
	I1009 20:18:11.127795   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1009 20:18:11.127828   63427 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.127896   63427 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1009 20:18:11.778998   63427 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19780-9412/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1009 20:18:11.779046   63427 cache_images.go:123] Successfully loaded all cached images
	I1009 20:18:11.779052   63427 cache_images.go:92] duration metric: took 14.730635989s to LoadCachedImages
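
The LoadCachedImages sequence that just completed follows a per-image pattern: inspect the image with podman, remove any mismatched copy with crictl, stat the cached tarball under /var/lib/minikube/images (skipping the copy when it already exists), and finally `podman load -i` the tarball. A compressed sketch of the final load step, with paths taken from the log and the helper name invented:

// loadcached.go - sketch of the "check tarball, then podman load" step seen above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads one image tarball into the CRI-O image store via podman.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image tarball missing: %w", err)
	}
	// Equivalent of: sudo podman load -i /var/lib/minikube/images/<image>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	images := []string{
		"/var/lib/minikube/images/etcd_3.5.15-0",
		"/var/lib/minikube/images/coredns_v1.11.3",
	}
	for _, img := range images {
		if err := loadCachedImage(img); err != nil {
			fmt.Println(err)
		}
	}
}
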
	I1009 20:18:11.779086   63427 kubeadm.go:934] updating node { 192.168.39.162 8443 v1.31.1 crio true true} ...
	I1009 20:18:11.779200   63427 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-480205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
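
The kubelet unit text printed above is rendered from the node's name, IP and Kubernetes version and then copied to the node (the 317-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A small sketch of rendering that drop-in with text/template; the template body mirrors the log, the rest is illustrative:

// kubeletdropin.go - sketch: render the kubelet systemd drop-in shown in the log above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "no-preload-480205", "192.168.39.162"}
	// Print to stdout here; on the node the rendered text ends up in
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	tmpl := template.Must(template.New("dropin").Parse(dropIn))
	_ = tmpl.Execute(os.Stdout, params)
}
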
	I1009 20:18:11.779290   63427 ssh_runner.go:195] Run: crio config
	I1009 20:18:11.823810   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:11.823835   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:11.823850   63427 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 20:18:11.823868   63427 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.162 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-480205 NodeName:no-preload-480205 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.162"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.162 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 20:18:11.823998   63427 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.162
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-480205"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.162
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.162"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 20:18:11.824053   63427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 20:18:11.834380   63427 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 20:18:11.834447   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 20:18:11.843217   63427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1009 20:18:11.860171   63427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 20:18:11.877082   63427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I1009 20:18:11.894719   63427 ssh_runner.go:195] Run: grep 192.168.39.162	control-plane.minikube.internal$ /etc/hosts
	I1009 20:18:11.898508   63427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.162	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
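
The grep and bash one-liner above are the idempotent /etc/hosts update: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts. A sketch of the same idea in Go rather than bash, with the path and entry taken from the log and the helper name invented:

// hostsentry.go - sketch of the "strip old entry, append new one" /etc/hosts update.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any line already mapping the host name, mirroring `grep -v $'\t<host>$'`.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.162", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
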
	I1009 20:18:11.910913   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:12.036793   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:12.054850   63427 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205 for IP: 192.168.39.162
	I1009 20:18:12.054872   63427 certs.go:194] generating shared ca certs ...
	I1009 20:18:12.054891   63427 certs.go:226] acquiring lock for ca certs: {Name:mk448dbb50e723bc9ae89422190da94e40f82fbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:12.055079   63427 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key
	I1009 20:18:12.055135   63427 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key
	I1009 20:18:12.055147   63427 certs.go:256] generating profile certs ...
	I1009 20:18:12.055233   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.key
	I1009 20:18:12.055290   63427 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key.d4bac337
	I1009 20:18:12.055346   63427 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key
	I1009 20:18:12.055484   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem (1338 bytes)
	W1009 20:18:12.055518   63427 certs.go:480] ignoring /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607_empty.pem, impossibly tiny 0 bytes
	I1009 20:18:12.055531   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 20:18:12.055563   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/ca.pem (1082 bytes)
	I1009 20:18:12.055589   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/cert.pem (1123 bytes)
	I1009 20:18:12.055622   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/certs/key.pem (1679 bytes)
	I1009 20:18:12.055685   63427 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem (1708 bytes)
	I1009 20:18:12.056362   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 20:18:12.098363   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 20:18:12.138215   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 20:18:12.163505   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 20:18:12.197000   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1009 20:18:12.226922   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 20:18:12.260018   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 20:18:12.283078   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1009 20:18:12.306681   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/ssl/certs/166072.pem --> /usr/share/ca-certificates/166072.pem (1708 bytes)
	I1009 20:18:12.329290   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 20:18:12.351909   63427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9412/.minikube/certs/16607.pem --> /usr/share/ca-certificates/16607.pem (1338 bytes)
	I1009 20:18:12.374738   63427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 20:18:12.392628   63427 ssh_runner.go:195] Run: openssl version
	I1009 20:18:12.398243   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166072.pem && ln -fs /usr/share/ca-certificates/166072.pem /etc/ssl/certs/166072.pem"
	I1009 20:18:12.408796   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413145   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  9 19:06 /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.413227   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166072.pem
	I1009 20:18:12.419056   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166072.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 20:18:12.429807   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 20:18:12.440638   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445248   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:48 /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.445304   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 20:18:12.450971   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 20:18:12.461763   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16607.pem && ln -fs /usr/share/ca-certificates/16607.pem /etc/ssl/certs/16607.pem"
	I1009 20:18:12.472078   63427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476832   63427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  9 19:06 /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.476883   63427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16607.pem
	I1009 20:18:12.482732   63427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16607.pem /etc/ssl/certs/51391683.0"
	I1009 20:18:12.493739   63427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 20:18:12.498128   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1009 20:18:12.504533   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1009 20:18:12.510838   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1009 20:18:12.517106   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1009 20:18:12.522836   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1009 20:18:12.528387   63427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
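
The openssl runs above do two jobs: `x509 -hash -noout` derives the subject hash used for the /etc/ssl/certs/<hash>.0 symlinks, and `x509 -noout -checkend 86400` confirms each control-plane certificate is still valid for at least another day (openssl exits 0 when the certificate will not expire within the given window). A minimal sketch of the expiry check, with the cert paths copied from the log and the helper name invented:

// certcheck.go - sketch: verify certificates won't expire within 24h via `openssl x509 -checkend`.
package main

import (
	"fmt"
	"os/exec"
)

func certValidFor(path string, seconds int) bool {
	// Exit status 0 means the certificate will NOT expire within `seconds`.
	err := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds)).Run()
	return err == nil
}

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		fmt.Printf("%s valid for 24h: %v\n", c, certValidFor(c, 86400))
	}
}
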
	I1009 20:18:12.533860   63427 kubeadm.go:392] StartCluster: {Name:no-preload-480205 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-480205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 20:18:12.533939   63427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 20:18:12.533974   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.573392   63427 cri.go:89] found id: ""
	I1009 20:18:12.573459   63427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 20:18:12.584594   63427 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1009 20:18:12.584615   63427 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1009 20:18:12.584660   63427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1009 20:18:12.595656   63427 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1009 20:18:12.596797   63427 kubeconfig.go:125] found "no-preload-480205" server: "https://192.168.39.162:8443"
	I1009 20:18:12.598877   63427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1009 20:18:12.608274   63427 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.162
	I1009 20:18:12.608299   63427 kubeadm.go:1160] stopping kube-system containers ...
	I1009 20:18:12.608310   63427 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1009 20:18:12.608369   63427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 20:18:12.644925   63427 cri.go:89] found id: ""
	I1009 20:18:12.644992   63427 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1009 20:18:12.661468   63427 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:18:12.671087   63427 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:18:12.671107   63427 kubeadm.go:157] found existing configuration files:
	
	I1009 20:18:12.671152   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:18:12.679852   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:18:12.679915   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:18:12.688829   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:18:12.697279   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:18:12.697334   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:18:12.705785   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.714620   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:18:12.714657   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:18:12.722966   63427 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:18:12.730999   63427 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:18:12.731047   63427 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:18:12.739970   63427 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:18:12.748980   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:12.857890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:08.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:11.081976   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:10.075093   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:10.574417   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.075214   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:11.574669   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.075388   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:12.575377   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.075087   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:13.574793   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.074494   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.574845   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.880777   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:17.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:13.727010   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:13.942433   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:14.021021   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
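
Because existing configuration files were found (restartPrimaryControlPlane), minikube re-runs individual `kubeadm init phase` steps rather than a full init: certs, kubeconfig, kubelet-start, control-plane and etcd, each against /var/tmp/minikube/kubeadm.yaml and with PATH pointed at the versioned binaries. A sketch of driving those phases in order; the phase list and paths come from the log, everything else is illustrative:

// kubeadmphases.go - sketch: run the kubeadm init phases seen above, in order.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func runPhase(phase string) error {
	args := append([]string{"init", "phase"}, strings.Fields(phase)...)
	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
	// Prepend the versioned binaries dir so kubeadm finds the matching kubelet/kubectl.
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		if err := runPhase(phase); err != nil {
			fmt.Println("phase", phase, "failed:", err)
			return
		}
	}
}
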
	I1009 20:18:14.144829   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:18:14.144918   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:14.645875   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.145872   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.184998   63427 api_server.go:72] duration metric: took 1.040165861s to wait for apiserver process to appear ...
	I1009 20:18:15.185034   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:18:15.185059   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:15.185680   63427 api_server.go:269] stopped: https://192.168.39.162:8443/healthz: Get "https://192.168.39.162:8443/healthz": dial tcp 192.168.39.162:8443: connect: connection refused
	I1009 20:18:15.685984   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:13.581243   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:16.079884   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:18.081998   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:15.074778   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:15.575349   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.074510   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:16.574830   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.074650   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:17.574725   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.075359   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.575302   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.074611   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:19.575097   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:18.286022   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.286048   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.286066   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.311734   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1009 20:18:18.311764   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1009 20:18:18.685256   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:18.689903   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:18.689930   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.185432   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.191636   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1009 20:18:19.191661   63427 api_server.go:103] status: https://192.168.39.162:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1009 20:18:19.685910   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:18:19.690518   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:18:19.696742   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:18:19.696769   63427 api_server.go:131] duration metric: took 4.511726583s to wait for apiserver health ...
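The per-check [+]/[-] breakdown that appears in the 500 responses above can also be requested by hand. A minimal sketch, assuming the default RBAC that exposes /healthz to unauthenticated clients is still in place on this cluster:

	curl -k 'https://192.168.39.162:8443/healthz?verbose'

A fully started apiserver answers 200 with every check marked [+]; while poststarthooks such as rbac/bootstrap-roles are still running it answers 500 and lists the failing checks, which is the state polled above until 20:18:19.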
	I1009 20:18:19.696777   63427 cni.go:84] Creating CNI manager for ""
	I1009 20:18:19.696783   63427 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:18:19.698684   63427 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:18:19.700003   63427 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:18:19.712555   63427 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
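The 496-byte file written above is minikube's bridge CNI configuration. The exact template is not shown in the log; a generic bridge-plus-portmap conflist of the same shape, with an illustrative subnet that is not taken from the log, looks roughly like this:

	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	EOF

The kubelet reads this from /etc/cni/net.d, which is consistent with the node only turning Ready after the kubelet restart later in the log.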
	I1009 20:18:19.731708   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:18:19.740770   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:18:19.740800   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1009 20:18:19.740808   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1009 20:18:19.740817   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1009 20:18:19.740823   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1009 20:18:19.740829   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1009 20:18:19.740835   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1009 20:18:19.740842   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:18:19.740848   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1009 20:18:19.740860   63427 system_pods.go:74] duration metric: took 9.132657ms to wait for pod list to return data ...
	I1009 20:18:19.740867   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:18:19.744292   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:18:19.744314   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:18:19.744329   63427 node_conditions.go:105] duration metric: took 3.45695ms to run NodePressure ...
	I1009 20:18:19.744346   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1009 20:18:20.036577   63427 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040661   63427 kubeadm.go:739] kubelet initialised
	I1009 20:18:20.040683   63427 kubeadm.go:740] duration metric: took 4.08281ms waiting for restarted kubelet to initialise ...
	I1009 20:18:20.040692   63427 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:20.047699   63427 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.052483   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052504   63427 pod_ready.go:82] duration metric: took 4.782367ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.052511   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.052518   63427 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.056863   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056892   63427 pod_ready.go:82] duration metric: took 4.363688ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.056903   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "etcd-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.056911   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.061762   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061786   63427 pod_ready.go:82] duration metric: took 4.867975ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.061796   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-apiserver-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.061804   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.135742   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135769   63427 pod_ready.go:82] duration metric: took 73.952718ms for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.135779   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.135785   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.534419   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534449   63427 pod_ready.go:82] duration metric: took 398.656543ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.534459   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-proxy-vbpbk" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.534466   63427 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:20.935390   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935416   63427 pod_ready.go:82] duration metric: took 400.943577ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:20.935426   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "kube-scheduler-no-preload-480205" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:20.935432   63427 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:21.336052   63427 pod_ready.go:98] node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336081   63427 pod_ready.go:82] duration metric: took 400.640044ms for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:18:21.336093   63427 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-480205" hosting pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:21.336102   63427 pod_ready.go:39] duration metric: took 1.295400779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:21.336122   63427 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:18:21.349596   63427 ops.go:34] apiserver oom_adj: -16
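An oom_adj of -16 is the legacy view of the strongly negative oom_score_adj given to critical static pods, i.e. the apiserver is nearly exempt from the OOM killer. A quick cross-check on the node (sketch; the modern interface is the one actually set by the kubelet):

	cat /proc/$(pgrep kube-apiserver)/oom_score_adj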
	I1009 20:18:21.349616   63427 kubeadm.go:597] duration metric: took 8.764995466s to restartPrimaryControlPlane
	I1009 20:18:21.349624   63427 kubeadm.go:394] duration metric: took 8.815768617s to StartCluster
	I1009 20:18:21.349639   63427 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.349716   63427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:18:21.351335   63427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:18:21.351607   63427 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.162 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:18:21.351692   63427 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:18:21.351813   63427 addons.go:69] Setting storage-provisioner=true in profile "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting metrics-server=true in profile "no-preload-480205"
	I1009 20:18:21.351832   63427 addons.go:234] Setting addon storage-provisioner=true in "no-preload-480205"
	I1009 20:18:21.351836   63427 addons.go:234] Setting addon metrics-server=true in "no-preload-480205"
	I1009 20:18:21.351821   63427 addons.go:69] Setting default-storageclass=true in profile "no-preload-480205"
	I1009 20:18:21.351845   63427 config.go:182] Loaded profile config "no-preload-480205": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:18:21.351883   63427 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-480205"
	W1009 20:18:21.351840   63427 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:18:21.351986   63427 host.go:66] Checking if "no-preload-480205" exists ...
	W1009 20:18:21.351843   63427 addons.go:243] addon metrics-server should already be in state true
	I1009 20:18:21.352071   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.352345   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352389   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352398   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352424   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.352457   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.352489   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.353957   63427 out.go:177] * Verifying Kubernetes components...
	I1009 20:18:21.355218   63427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:18:21.371429   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I1009 20:18:21.371808   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.372342   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.372372   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.372777   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.372988   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.376878   63427 addons.go:234] Setting addon default-storageclass=true in "no-preload-480205"
	W1009 20:18:21.376899   63427 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:18:21.376926   63427 host.go:66] Checking if "no-preload-480205" exists ...
	I1009 20:18:21.377284   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.377323   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.390054   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1009 20:18:21.390616   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I1009 20:18:21.391127   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391270   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.391803   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.391830   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392008   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.392033   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.392208   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392359   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.392734   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.392776   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.392957   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.393001   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.397090   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40193
	I1009 20:18:21.397605   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.398086   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.398105   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.398405   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.398921   63427 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:18:21.398966   63427 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:18:21.408719   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I1009 20:18:21.408929   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I1009 20:18:21.409048   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409326   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.409582   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409594   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409876   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.409893   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.409956   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410100   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.410223   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.410564   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.412097   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.412300   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.414239   63427 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:18:21.414326   63427 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:18:19.381608   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:21.415507   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:18:21.415525   63427 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.415530   63427 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:18:21.415536   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:18:21.415548   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.415549   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.417045   63427 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39217
	I1009 20:18:21.417788   63427 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:18:21.418610   63427 main.go:141] libmachine: Using API Version  1
	I1009 20:18:21.418626   63427 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:18:21.418981   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419016   63427 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:18:21.419279   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetState
	I1009 20:18:21.419611   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.419631   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.419760   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.419897   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.420028   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.420123   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.420454   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420758   63427 main.go:141] libmachine: (no-preload-480205) Calling .DriverName
	I1009 20:18:21.420943   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.420963   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.420969   63427 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.420989   63427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:18:21.421002   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHHostname
	I1009 20:18:21.421193   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.421373   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.421545   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.421675   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.423520   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425058   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHPort
	I1009 20:18:21.425099   63427 main.go:141] libmachine: (no-preload-480205) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:fc:59", ip: ""} in network mk-no-preload-480205: {Iface:virbr1 ExpiryTime:2024-10-09 21:17:48 +0000 UTC Type:0 Mac:52:54:00:1d:fc:59 Iaid: IPaddr:192.168.39.162 Prefix:24 Hostname:no-preload-480205 Clientid:01:52:54:00:1d:fc:59}
	I1009 20:18:21.425124   63427 main.go:141] libmachine: (no-preload-480205) DBG | domain no-preload-480205 has defined IP address 192.168.39.162 and MAC address 52:54:00:1d:fc:59 in network mk-no-preload-480205
	I1009 20:18:21.425247   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHKeyPath
	I1009 20:18:21.425381   63427 main.go:141] libmachine: (no-preload-480205) Calling .GetSSHUsername
	I1009 20:18:21.425511   63427 sshutil.go:53] new ssh client: &{IP:192.168.39.162 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/no-preload-480205/id_rsa Username:docker}
	I1009 20:18:21.558337   63427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:18:21.587934   63427 node_ready.go:35] waiting up to 6m0s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:21.692866   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:18:21.705177   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:18:21.705201   63427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:18:21.724872   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:18:21.796761   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:18:21.796789   63427 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:18:21.846162   63427 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:21.846187   63427 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:18:21.880785   63427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:18:22.146852   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.146879   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147190   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147241   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147254   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.147266   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.147280   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.147532   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.147534   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.147591   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.161873   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.161893   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.162134   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.162156   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.162162   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966531   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.24162682s)
	I1009 20:18:22.966588   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966603   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966536   63427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.085706223s)
	I1009 20:18:22.966699   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966712   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.966892   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.966932   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.966939   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.966947   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.966954   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967001   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967020   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967040   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967073   63427 main.go:141] libmachine: Making call to close driver server
	I1009 20:18:22.967086   63427 main.go:141] libmachine: (no-preload-480205) Calling .Close
	I1009 20:18:22.967234   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967258   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967332   63427 main.go:141] libmachine: (no-preload-480205) DBG | Closing plugin on server side
	I1009 20:18:22.967342   63427 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:18:22.967356   63427 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:18:22.967365   63427 addons.go:475] Verifying addon metrics-server=true in "no-preload-480205"
	I1009 20:18:22.969240   63427 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1009 20:18:22.970479   63427 addons.go:510] duration metric: took 1.618800365s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
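Whether the metrics-server addon actually becomes serviceable can be checked from outside the test with plain kubectl; a sketch, assuming the kubeconfig written above and that the kubectl context carries the profile name no-preload-480205:

	kubectl --context no-preload-480205 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-480205 get apiservice v1beta1.metrics.k8s.io

In the readiness polling that follows in this log, the metrics-server pod keeps reporting "Ready":"False".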
	I1009 20:18:20.580980   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:22.581407   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:20.075155   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:20.575362   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.074859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:21.574637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.074532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:22.574916   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.075357   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.574640   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.074579   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:24.574711   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:23.879983   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:26.380696   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:23.592071   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:26.091763   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:24.581861   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:27.082730   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:25.075032   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:25.575412   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.075470   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:26.574434   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.074827   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:27.575075   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.074653   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.575222   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.075440   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:29.575192   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:28.380889   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.880597   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:28.592011   63427 node_ready.go:53] node "no-preload-480205" has status "Ready":"False"
	I1009 20:18:29.091688   63427 node_ready.go:49] node "no-preload-480205" has status "Ready":"True"
	I1009 20:18:29.091710   63427 node_ready.go:38] duration metric: took 7.503746219s for node "no-preload-480205" to be "Ready" ...
	I1009 20:18:29.091719   63427 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:18:29.097050   63427 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101164   63427 pod_ready.go:93] pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.101185   63427 pod_ready.go:82] duration metric: took 4.107489ms for pod "coredns-7c65d6cfc9-dddm2" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.101195   63427 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105318   63427 pod_ready.go:93] pod "etcd-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.105337   63427 pod_ready.go:82] duration metric: took 4.133854ms for pod "etcd-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.105348   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108895   63427 pod_ready.go:93] pod "kube-apiserver-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:29.108910   63427 pod_ready.go:82] duration metric: took 3.556306ms for pod "kube-apiserver-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.108920   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.114777   63427 pod_ready.go:103] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.615669   63427 pod_ready.go:93] pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.615692   63427 pod_ready.go:82] duration metric: took 2.506765342s for pod "kube-controller-manager-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.615703   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620649   63427 pod_ready.go:93] pod "kube-proxy-vbpbk" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.620670   63427 pod_ready.go:82] duration metric: took 4.959968ms for pod "kube-proxy-vbpbk" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.620682   63427 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892060   63427 pod_ready.go:93] pod "kube-scheduler-no-preload-480205" in "kube-system" namespace has status "Ready":"True"
	I1009 20:18:31.892081   63427 pod_ready.go:82] duration metric: took 271.38787ms for pod "kube-scheduler-no-preload-480205" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:31.892089   63427 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	I1009 20:18:29.580683   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:31.581273   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:30.075304   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:30.574688   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.075159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:31.574404   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.074889   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:32.575136   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.074459   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.574779   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.074797   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:34.574832   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:33.380854   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.880599   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.899462   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.397489   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:33.582344   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:36.081582   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:35.074501   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:35.574403   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.075399   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:36.575034   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.074714   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.574446   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.074619   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:38.574644   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.074530   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:39.574700   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:37.881601   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.380041   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.380712   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.397848   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.398202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:42.400630   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:38.582883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:41.080905   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:40.074863   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:40.575174   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.075008   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:41.574859   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.074972   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:42.574851   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.074805   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:43.575033   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.074718   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.575423   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:44.880876   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.881328   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:44.898897   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:47.399335   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:43.581383   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:46.081078   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:48.081422   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:45.074591   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:45.575195   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.075303   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:46.575186   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:46.575288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:46.614320   64287 cri.go:89] found id: ""
	I1009 20:18:46.614343   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.614351   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:46.614357   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:46.614402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:46.646355   64287 cri.go:89] found id: ""
	I1009 20:18:46.646384   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.646395   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:46.646403   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:46.646450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:46.678758   64287 cri.go:89] found id: ""
	I1009 20:18:46.678788   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.678798   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:46.678805   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:46.678859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:46.721469   64287 cri.go:89] found id: ""
	I1009 20:18:46.721496   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.721507   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:46.721514   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:46.721573   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:46.759822   64287 cri.go:89] found id: ""
	I1009 20:18:46.759853   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.759861   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:46.759866   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:46.759923   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:46.798221   64287 cri.go:89] found id: ""
	I1009 20:18:46.798250   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.798261   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:46.798268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:46.798327   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:46.832044   64287 cri.go:89] found id: ""
	I1009 20:18:46.832067   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.832075   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:46.832080   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:46.832143   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:46.865003   64287 cri.go:89] found id: ""
	I1009 20:18:46.865030   64287 logs.go:282] 0 containers: []
	W1009 20:18:46.865041   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:46.865051   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:46.865066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:46.916927   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:46.916964   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:46.930547   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:46.930576   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:47.042476   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:47.042501   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:47.042516   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:47.116701   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:47.116732   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:48.888593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:51.380593   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.899106   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:52.397825   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:50.580775   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:53.081256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:49.659335   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:49.672837   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:49.672906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:49.709722   64287 cri.go:89] found id: ""
	I1009 20:18:49.709750   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.709761   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:49.709769   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:49.709827   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:49.741187   64287 cri.go:89] found id: ""
	I1009 20:18:49.741209   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.741216   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:49.741221   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:49.741278   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:49.782564   64287 cri.go:89] found id: ""
	I1009 20:18:49.782593   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.782603   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:49.782610   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:49.782667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:49.820586   64287 cri.go:89] found id: ""
	I1009 20:18:49.820618   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.820628   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:49.820634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:49.820688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:49.854573   64287 cri.go:89] found id: ""
	I1009 20:18:49.854600   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.854608   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:49.854615   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:49.854672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:49.889947   64287 cri.go:89] found id: ""
	I1009 20:18:49.889976   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.889986   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:49.889993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:49.890049   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:49.925309   64287 cri.go:89] found id: ""
	I1009 20:18:49.925339   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.925350   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:49.925357   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:49.925432   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:49.961993   64287 cri.go:89] found id: ""
	I1009 20:18:49.962019   64287 logs.go:282] 0 containers: []
	W1009 20:18:49.962029   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:49.962039   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:49.962053   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:50.051610   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:50.051642   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:50.092363   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:50.092388   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:50.145606   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:50.145639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:50.160017   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:50.160047   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:50.231984   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:52.733040   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:52.748018   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:52.748075   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:52.789413   64287 cri.go:89] found id: ""
	I1009 20:18:52.789440   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.789452   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:52.789458   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:52.789514   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:52.823188   64287 cri.go:89] found id: ""
	I1009 20:18:52.823219   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.823229   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:52.823237   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:52.823305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:52.858675   64287 cri.go:89] found id: ""
	I1009 20:18:52.858704   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.858716   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:52.858724   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:52.858782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:52.893243   64287 cri.go:89] found id: ""
	I1009 20:18:52.893277   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.893287   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:52.893295   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:52.893363   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:52.928209   64287 cri.go:89] found id: ""
	I1009 20:18:52.928240   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.928248   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:52.928255   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:52.928314   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:52.962418   64287 cri.go:89] found id: ""
	I1009 20:18:52.962446   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.962455   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:52.962461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:52.962510   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:52.996276   64287 cri.go:89] found id: ""
	I1009 20:18:52.996304   64287 logs.go:282] 0 containers: []
	W1009 20:18:52.996315   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:52.996322   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:52.996380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:53.029693   64287 cri.go:89] found id: ""
	I1009 20:18:53.029718   64287 logs.go:282] 0 containers: []
	W1009 20:18:53.029728   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:53.029738   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:53.029752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:53.042690   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:53.042713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:53.114114   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:53.114132   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:53.114143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:53.192280   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:53.192314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:53.230392   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:53.230416   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:53.380621   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.881245   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:54.399437   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:56.900141   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.580802   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:58.082285   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:55.781562   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:55.795951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:55.796017   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:55.836037   64287 cri.go:89] found id: ""
	I1009 20:18:55.836065   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.836074   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:55.836080   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:55.836126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:55.870534   64287 cri.go:89] found id: ""
	I1009 20:18:55.870564   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.870574   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:55.870580   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:55.870647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:55.906415   64287 cri.go:89] found id: ""
	I1009 20:18:55.906438   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.906447   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:55.906454   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:55.906507   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:55.943387   64287 cri.go:89] found id: ""
	I1009 20:18:55.943414   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.943424   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:55.943431   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:55.943489   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:55.977004   64287 cri.go:89] found id: ""
	I1009 20:18:55.977027   64287 logs.go:282] 0 containers: []
	W1009 20:18:55.977036   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:55.977044   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:55.977120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:56.015608   64287 cri.go:89] found id: ""
	I1009 20:18:56.015634   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.015648   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:56.015654   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:56.015703   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:56.049324   64287 cri.go:89] found id: ""
	I1009 20:18:56.049355   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.049366   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:56.049375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:56.049428   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:56.084914   64287 cri.go:89] found id: ""
	I1009 20:18:56.084937   64287 logs.go:282] 0 containers: []
	W1009 20:18:56.084946   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:56.084955   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:56.084975   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:56.098176   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:56.098197   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:56.178386   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:56.178403   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:56.178414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:56.256547   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:56.256582   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:56.294138   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:56.294170   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:58.851568   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:18:58.865845   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:18:58.865902   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:18:58.904144   64287 cri.go:89] found id: ""
	I1009 20:18:58.904169   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.904177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:18:58.904194   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:18:58.904267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:18:58.936739   64287 cri.go:89] found id: ""
	I1009 20:18:58.936769   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.936780   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:18:58.936790   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:18:58.936848   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:18:58.971592   64287 cri.go:89] found id: ""
	I1009 20:18:58.971623   64287 logs.go:282] 0 containers: []
	W1009 20:18:58.971631   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:18:58.971638   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:18:58.971690   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:18:59.007176   64287 cri.go:89] found id: ""
	I1009 20:18:59.007205   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.007228   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:18:59.007234   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:18:59.007283   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:18:59.041760   64287 cri.go:89] found id: ""
	I1009 20:18:59.041789   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.041800   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:18:59.041807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:18:59.041865   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:18:59.077912   64287 cri.go:89] found id: ""
	I1009 20:18:59.077940   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.077951   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:18:59.077958   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:18:59.078014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:18:59.110669   64287 cri.go:89] found id: ""
	I1009 20:18:59.110701   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.110712   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:18:59.110720   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:18:59.110799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:18:59.144869   64287 cri.go:89] found id: ""
	I1009 20:18:59.144897   64287 logs.go:282] 0 containers: []
	W1009 20:18:59.144907   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:18:59.144917   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:18:59.144952   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:18:59.229014   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:18:59.229054   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:18:59.272687   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:18:59.272725   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:18:59.328090   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:18:59.328123   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:18:59.342264   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:18:59.342294   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:18:59.419880   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:18:58.379973   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.381314   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.382266   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:18:59.398378   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.898047   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:00.581003   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:02.581660   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:01.920869   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:01.933620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:01.933685   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:01.967549   64287 cri.go:89] found id: ""
	I1009 20:19:01.967577   64287 logs.go:282] 0 containers: []
	W1009 20:19:01.967585   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:01.967590   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:01.967675   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:02.005465   64287 cri.go:89] found id: ""
	I1009 20:19:02.005491   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.005500   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:02.005505   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:02.005558   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:02.038140   64287 cri.go:89] found id: ""
	I1009 20:19:02.038162   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.038170   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:02.038176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:02.038219   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:02.070394   64287 cri.go:89] found id: ""
	I1009 20:19:02.070423   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.070434   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:02.070442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:02.070505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:02.110634   64287 cri.go:89] found id: ""
	I1009 20:19:02.110655   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.110663   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:02.110669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:02.110723   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:02.166408   64287 cri.go:89] found id: ""
	I1009 20:19:02.166445   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.166457   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:02.166467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:02.166541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:02.218816   64287 cri.go:89] found id: ""
	I1009 20:19:02.218846   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.218856   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:02.218862   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:02.218914   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:02.265090   64287 cri.go:89] found id: ""
	I1009 20:19:02.265118   64287 logs.go:282] 0 containers: []
	W1009 20:19:02.265130   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:02.265140   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:02.265156   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:02.278134   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:02.278160   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:02.348422   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:02.348453   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:02.348467   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:02.429614   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:02.429651   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:02.469100   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:02.469132   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:04.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.881374   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:04.397774   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:06.402923   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.081386   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:07.580670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:05.020914   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:05.034760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:05.034833   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:05.071078   64287 cri.go:89] found id: ""
	I1009 20:19:05.071109   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.071120   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:05.071128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:05.071190   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:05.105517   64287 cri.go:89] found id: ""
	I1009 20:19:05.105545   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.105553   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:05.105558   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:05.105607   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:05.139601   64287 cri.go:89] found id: ""
	I1009 20:19:05.139624   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.139632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:05.139637   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:05.139682   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:05.174329   64287 cri.go:89] found id: ""
	I1009 20:19:05.174351   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.174359   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:05.174365   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:05.174410   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:05.212336   64287 cri.go:89] found id: ""
	I1009 20:19:05.212368   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.212377   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:05.212383   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:05.212464   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:05.251822   64287 cri.go:89] found id: ""
	I1009 20:19:05.251844   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.251851   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:05.251857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:05.251901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:05.291055   64287 cri.go:89] found id: ""
	I1009 20:19:05.291097   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.291106   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:05.291111   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:05.291160   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:05.327223   64287 cri.go:89] found id: ""
	I1009 20:19:05.327248   64287 logs.go:282] 0 containers: []
	W1009 20:19:05.327256   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:05.327266   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:05.327281   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:05.377047   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:05.377086   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:05.391232   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:05.391263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:05.464815   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:05.464837   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:05.464850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:05.542581   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:05.542616   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:08.084504   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:08.100466   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:08.100535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:08.138451   64287 cri.go:89] found id: ""
	I1009 20:19:08.138481   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.138489   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:08.138494   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:08.138551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:08.176839   64287 cri.go:89] found id: ""
	I1009 20:19:08.176867   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.176877   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:08.176884   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:08.176941   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:08.234435   64287 cri.go:89] found id: ""
	I1009 20:19:08.234461   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.234472   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:08.234479   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:08.234544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:08.270727   64287 cri.go:89] found id: ""
	I1009 20:19:08.270753   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.270764   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:08.270771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:08.270831   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:08.305139   64287 cri.go:89] found id: ""
	I1009 20:19:08.305167   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.305177   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:08.305185   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:08.305237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:08.338153   64287 cri.go:89] found id: ""
	I1009 20:19:08.338197   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.338209   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:08.338217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:08.338272   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:08.376046   64287 cri.go:89] found id: ""
	I1009 20:19:08.376073   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.376081   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:08.376087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:08.376144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:08.416555   64287 cri.go:89] found id: ""
	I1009 20:19:08.416595   64287 logs.go:282] 0 containers: []
	W1009 20:19:08.416606   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:08.416617   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:08.416630   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:08.470868   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:08.470898   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:08.486601   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:08.486623   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:08.563325   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:08.563363   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:08.563378   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:08.643743   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:08.643778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:09.380849   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.881773   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:08.898969   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.399277   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:09.580913   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.581693   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:11.197637   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:11.210992   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:11.211078   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:11.248309   64287 cri.go:89] found id: ""
	I1009 20:19:11.248331   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.248339   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:11.248345   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:11.248388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:11.282511   64287 cri.go:89] found id: ""
	I1009 20:19:11.282537   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.282546   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:11.282551   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:11.282603   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:11.319447   64287 cri.go:89] found id: ""
	I1009 20:19:11.319473   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.319480   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:11.319486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:11.319543   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:11.353838   64287 cri.go:89] found id: ""
	I1009 20:19:11.353866   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.353879   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:11.353887   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:11.353951   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:11.395257   64287 cri.go:89] found id: ""
	I1009 20:19:11.395288   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.395300   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:11.395309   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:11.395373   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:11.434406   64287 cri.go:89] found id: ""
	I1009 20:19:11.434430   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.434438   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:11.434445   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:11.434506   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:11.468162   64287 cri.go:89] found id: ""
	I1009 20:19:11.468184   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.468192   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:11.468197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:11.468252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:11.500214   64287 cri.go:89] found id: ""
	I1009 20:19:11.500247   64287 logs.go:282] 0 containers: []
	W1009 20:19:11.500257   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:11.500267   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:11.500282   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:11.566430   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:11.566449   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:11.566463   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:11.642784   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:11.642815   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:11.680882   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:11.680908   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:11.731386   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:11.731414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.245696   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:14.258882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:14.258948   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:14.293339   64287 cri.go:89] found id: ""
	I1009 20:19:14.293365   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.293372   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:14.293379   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:14.293424   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:14.327246   64287 cri.go:89] found id: ""
	I1009 20:19:14.327268   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.327275   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:14.327287   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:14.327334   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:14.366384   64287 cri.go:89] found id: ""
	I1009 20:19:14.366412   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.366423   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:14.366430   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:14.366498   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:14.403913   64287 cri.go:89] found id: ""
	I1009 20:19:14.403950   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.403958   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:14.403965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:14.404021   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:14.442655   64287 cri.go:89] found id: ""
	I1009 20:19:14.442684   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.442694   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:14.442702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:14.442749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:14.477895   64287 cri.go:89] found id: ""
	I1009 20:19:14.477921   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.477928   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:14.477934   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:14.477979   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:14.512833   64287 cri.go:89] found id: ""
	I1009 20:19:14.512871   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.512882   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:14.512889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:14.512955   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:14.546557   64287 cri.go:89] found id: ""
	I1009 20:19:14.546582   64287 logs.go:282] 0 containers: []
	W1009 20:19:14.546590   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:14.546597   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:14.546610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:14.599579   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:14.599610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:14.613347   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:14.613371   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:14.380816   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.879793   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.399353   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:15.899223   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:13.584162   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:16.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.081179   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:14.689272   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:14.689295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:14.689306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:14.770362   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:14.770394   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:17.312105   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:17.326851   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:17.326906   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:17.364760   64287 cri.go:89] found id: ""
	I1009 20:19:17.364785   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.364793   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:17.364799   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:17.364851   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:17.398149   64287 cri.go:89] found id: ""
	I1009 20:19:17.398172   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.398181   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:17.398189   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:17.398247   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:17.432746   64287 cri.go:89] found id: ""
	I1009 20:19:17.432778   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.432789   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:17.432797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:17.432846   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:17.468095   64287 cri.go:89] found id: ""
	I1009 20:19:17.468125   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.468137   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:17.468145   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:17.468206   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:17.503152   64287 cri.go:89] found id: ""
	I1009 20:19:17.503184   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.503196   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:17.503203   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:17.503257   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:17.543966   64287 cri.go:89] found id: ""
	I1009 20:19:17.543993   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.544002   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:17.544008   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:17.544077   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:17.582780   64287 cri.go:89] found id: ""
	I1009 20:19:17.582801   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.582809   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:17.582814   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:17.582860   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:17.621907   64287 cri.go:89] found id: ""
	I1009 20:19:17.621933   64287 logs.go:282] 0 containers: []
	W1009 20:19:17.621942   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:17.621951   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:17.621963   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:17.674239   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:17.674271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:17.688301   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:17.688331   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:17.759965   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:17.759989   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:17.760005   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:17.836052   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:17.836087   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:18.880033   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:21.381550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:18.399116   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.898441   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:22.899243   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.581486   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:23.081145   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:20.380237   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:20.393343   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:20.393409   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:20.427462   64287 cri.go:89] found id: ""
	I1009 20:19:20.427491   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.427501   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:20.427509   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:20.427560   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:20.463708   64287 cri.go:89] found id: ""
	I1009 20:19:20.463736   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.463747   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:20.463754   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:20.463818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:20.497898   64287 cri.go:89] found id: ""
	I1009 20:19:20.497924   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.497931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:20.497937   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:20.497985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:20.531880   64287 cri.go:89] found id: ""
	I1009 20:19:20.531910   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.531918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:20.531923   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:20.531971   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:20.565291   64287 cri.go:89] found id: ""
	I1009 20:19:20.565319   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.565330   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:20.565342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:20.565390   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:20.604786   64287 cri.go:89] found id: ""
	I1009 20:19:20.604815   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.604827   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:20.604835   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:20.604891   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:20.646136   64287 cri.go:89] found id: ""
	I1009 20:19:20.646161   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.646169   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:20.646175   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:20.646231   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:20.687503   64287 cri.go:89] found id: ""
	I1009 20:19:20.687527   64287 logs.go:282] 0 containers: []
	W1009 20:19:20.687540   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:20.687548   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:20.687560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:20.738026   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:20.738057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:20.751432   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:20.751459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:20.826192   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:20.826219   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:20.826239   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:20.905874   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:20.905900   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.445277   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:23.460245   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:23.460305   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:23.503559   64287 cri.go:89] found id: ""
	I1009 20:19:23.503582   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.503590   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:23.503596   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:23.503652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:23.542748   64287 cri.go:89] found id: ""
	I1009 20:19:23.542783   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.542791   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:23.542797   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:23.542857   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:23.585668   64287 cri.go:89] found id: ""
	I1009 20:19:23.585689   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.585696   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:23.585702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:23.585753   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:23.623863   64287 cri.go:89] found id: ""
	I1009 20:19:23.623884   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.623891   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:23.623897   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:23.623952   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:23.657025   64287 cri.go:89] found id: ""
	I1009 20:19:23.657049   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.657057   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:23.657063   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:23.657120   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:23.692536   64287 cri.go:89] found id: ""
	I1009 20:19:23.692573   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.692583   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:23.692590   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:23.692657   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:23.732552   64287 cri.go:89] found id: ""
	I1009 20:19:23.732580   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.732591   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:23.732599   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:23.732645   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:23.767308   64287 cri.go:89] found id: ""
	I1009 20:19:23.767345   64287 logs.go:282] 0 containers: []
	W1009 20:19:23.767356   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:23.767366   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:23.767380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:23.780909   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:23.780948   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:23.853312   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:23.853340   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:23.853355   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:23.934930   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:23.934968   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:23.977906   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:23.977943   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:23.881669   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.380447   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.397833   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.398843   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:25.082071   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:27.580992   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:26.530146   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:26.545527   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:26.545598   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:26.580942   64287 cri.go:89] found id: ""
	I1009 20:19:26.580970   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.580981   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:26.580988   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:26.581050   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:26.621165   64287 cri.go:89] found id: ""
	I1009 20:19:26.621188   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.621195   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:26.621201   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:26.621245   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:26.655664   64287 cri.go:89] found id: ""
	I1009 20:19:26.655690   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.655697   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:26.655703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:26.655749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:26.691951   64287 cri.go:89] found id: ""
	I1009 20:19:26.691973   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.691981   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:26.691987   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:26.692033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:26.728905   64287 cri.go:89] found id: ""
	I1009 20:19:26.728937   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.728948   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:26.728955   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:26.729013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:26.763673   64287 cri.go:89] found id: ""
	I1009 20:19:26.763697   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.763705   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:26.763711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:26.763765   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:26.798507   64287 cri.go:89] found id: ""
	I1009 20:19:26.798535   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.798547   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:26.798554   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:26.798615   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:26.836114   64287 cri.go:89] found id: ""
	I1009 20:19:26.836140   64287 logs.go:282] 0 containers: []
	W1009 20:19:26.836148   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:26.836156   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:26.836169   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:26.914136   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:26.914160   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:26.914175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:26.995023   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:26.995055   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:27.033788   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:27.033817   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:27.084313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:27.084341   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.597899   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:29.611695   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:29.611756   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:28.381564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.881085   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.899697   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.398514   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:30.081670   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:32.580939   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:29.646690   64287 cri.go:89] found id: ""
	I1009 20:19:29.646718   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.646726   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:29.646732   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:29.646780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:29.681379   64287 cri.go:89] found id: ""
	I1009 20:19:29.681408   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.681418   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:29.681425   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:29.681481   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:29.717988   64287 cri.go:89] found id: ""
	I1009 20:19:29.718012   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.718020   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:29.718026   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:29.718076   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:29.752783   64287 cri.go:89] found id: ""
	I1009 20:19:29.752815   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.752825   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:29.752833   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:29.752883   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:29.786079   64287 cri.go:89] found id: ""
	I1009 20:19:29.786105   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.786114   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:29.786120   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:29.786167   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:29.820630   64287 cri.go:89] found id: ""
	I1009 20:19:29.820655   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.820663   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:29.820669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:29.820727   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:29.855992   64287 cri.go:89] found id: ""
	I1009 20:19:29.856022   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.856033   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:29.856040   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:29.856096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:29.891196   64287 cri.go:89] found id: ""
	I1009 20:19:29.891224   64287 logs.go:282] 0 containers: []
	W1009 20:19:29.891234   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:29.891244   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:29.891257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:29.945636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:29.945665   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:29.959715   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:29.959741   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:30.034023   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:30.034046   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:30.034066   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:30.109512   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:30.109545   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.651252   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:32.665196   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:32.665253   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:32.701468   64287 cri.go:89] found id: ""
	I1009 20:19:32.701497   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.701516   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:32.701525   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:32.701581   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:32.740585   64287 cri.go:89] found id: ""
	I1009 20:19:32.740611   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.740623   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:32.740629   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:32.740699   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:32.773765   64287 cri.go:89] found id: ""
	I1009 20:19:32.773792   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.773803   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:32.773810   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:32.773869   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:32.812647   64287 cri.go:89] found id: ""
	I1009 20:19:32.812680   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.812695   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:32.812702   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:32.812752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:32.847044   64287 cri.go:89] found id: ""
	I1009 20:19:32.847092   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.847101   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:32.847107   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:32.847153   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:32.885410   64287 cri.go:89] found id: ""
	I1009 20:19:32.885439   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.885448   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:32.885455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:32.885515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:32.922917   64287 cri.go:89] found id: ""
	I1009 20:19:32.922944   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.922955   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:32.922963   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:32.923026   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:32.958993   64287 cri.go:89] found id: ""
	I1009 20:19:32.959019   64287 logs.go:282] 0 containers: []
	W1009 20:19:32.959027   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:32.959037   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:32.959052   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:32.996844   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:32.996871   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:33.047684   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:33.047715   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:33.061829   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:33.061856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:33.135278   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:33.135302   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:33.135314   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:33.380221   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.380648   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:34.897646   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:36.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.081326   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:37.580347   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:35.722479   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:35.736670   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:35.736745   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:35.778594   64287 cri.go:89] found id: ""
	I1009 20:19:35.778617   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.778625   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:35.778630   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:35.778677   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:35.810906   64287 cri.go:89] found id: ""
	I1009 20:19:35.810934   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.810945   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:35.810954   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:35.811014   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:35.846226   64287 cri.go:89] found id: ""
	I1009 20:19:35.846258   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.846269   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:35.846277   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:35.846325   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:35.880509   64287 cri.go:89] found id: ""
	I1009 20:19:35.880536   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.880547   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:35.880555   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:35.880613   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:35.916039   64287 cri.go:89] found id: ""
	I1009 20:19:35.916067   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.916077   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:35.916085   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:35.916142   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:35.948068   64287 cri.go:89] found id: ""
	I1009 20:19:35.948099   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.948107   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:35.948113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:35.948168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:35.982531   64287 cri.go:89] found id: ""
	I1009 20:19:35.982556   64287 logs.go:282] 0 containers: []
	W1009 20:19:35.982565   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:35.982571   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:35.982618   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:36.016284   64287 cri.go:89] found id: ""
	I1009 20:19:36.016307   64287 logs.go:282] 0 containers: []
	W1009 20:19:36.016314   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:36.016324   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:36.016333   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:36.096773   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:36.096807   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:36.135382   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:36.135408   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:36.189157   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:36.189189   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:36.202243   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:36.202272   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:36.289968   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:38.790894   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:38.804960   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:38.805020   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:38.840867   64287 cri.go:89] found id: ""
	I1009 20:19:38.840891   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.840898   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:38.840904   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:38.840961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:38.877659   64287 cri.go:89] found id: ""
	I1009 20:19:38.877686   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.877695   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:38.877709   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:38.877768   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:38.917914   64287 cri.go:89] found id: ""
	I1009 20:19:38.917938   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.917947   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:38.917954   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:38.918011   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:38.955879   64287 cri.go:89] found id: ""
	I1009 20:19:38.955907   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.955918   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:38.955925   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:38.955985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:38.991683   64287 cri.go:89] found id: ""
	I1009 20:19:38.991712   64287 logs.go:282] 0 containers: []
	W1009 20:19:38.991723   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:38.991730   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:38.991815   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:39.026167   64287 cri.go:89] found id: ""
	I1009 20:19:39.026192   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.026199   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:39.026205   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:39.026273   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:39.061646   64287 cri.go:89] found id: ""
	I1009 20:19:39.061676   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.061692   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:39.061699   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:39.061760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:39.097660   64287 cri.go:89] found id: ""
	I1009 20:19:39.097687   64287 logs.go:282] 0 containers: []
	W1009 20:19:39.097696   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:39.097706   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:39.097720   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:39.149199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:39.149232   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:39.162366   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:39.162391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:39.237267   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:39.237295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:39.237310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:39.320531   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:39.320566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:37.882355   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:40.380792   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.381234   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:38.899362   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.397980   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:39.580565   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:42.081212   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:41.865807   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:41.880948   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:41.881015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:41.917675   64287 cri.go:89] found id: ""
	I1009 20:19:41.917703   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.917714   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:41.917722   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:41.917780   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:41.957152   64287 cri.go:89] found id: ""
	I1009 20:19:41.957180   64287 logs.go:282] 0 containers: []
	W1009 20:19:41.957189   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:41.957194   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:41.957250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:42.008129   64287 cri.go:89] found id: ""
	I1009 20:19:42.008153   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.008162   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:42.008170   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:42.008232   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:42.042628   64287 cri.go:89] found id: ""
	I1009 20:19:42.042651   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.042658   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:42.042669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:42.042712   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:42.080123   64287 cri.go:89] found id: ""
	I1009 20:19:42.080147   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.080155   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:42.080161   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:42.080214   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:42.120070   64287 cri.go:89] found id: ""
	I1009 20:19:42.120099   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.120108   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:42.120114   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:42.120161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:42.153686   64287 cri.go:89] found id: ""
	I1009 20:19:42.153717   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.153727   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:42.153735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:42.153805   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:42.187793   64287 cri.go:89] found id: ""
	I1009 20:19:42.187820   64287 logs.go:282] 0 containers: []
	W1009 20:19:42.187832   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:42.187842   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:42.187856   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:42.267510   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:42.267545   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:42.267559   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:42.348061   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:42.348095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:42.393407   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:42.393431   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:42.448547   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:42.448580   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:44.381312   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:46.881511   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:43.398743   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:45.398982   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.898041   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.581720   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:47.081990   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:44.963603   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:44.977341   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:44.977417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:45.018729   64287 cri.go:89] found id: ""
	I1009 20:19:45.018756   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.018764   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:45.018770   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:45.018821   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:45.055232   64287 cri.go:89] found id: ""
	I1009 20:19:45.055259   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.055267   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:45.055273   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:45.055332   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:45.090575   64287 cri.go:89] found id: ""
	I1009 20:19:45.090604   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.090614   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:45.090620   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:45.090692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:45.126426   64287 cri.go:89] found id: ""
	I1009 20:19:45.126452   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.126459   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:45.126465   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:45.126523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:45.166192   64287 cri.go:89] found id: ""
	I1009 20:19:45.166223   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.166232   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:45.166239   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:45.166301   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:45.200353   64287 cri.go:89] found id: ""
	I1009 20:19:45.200384   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.200400   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:45.200406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:45.200454   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:45.235696   64287 cri.go:89] found id: ""
	I1009 20:19:45.235729   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.235740   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:45.235747   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:45.235807   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:45.271937   64287 cri.go:89] found id: ""
	I1009 20:19:45.271969   64287 logs.go:282] 0 containers: []
	W1009 20:19:45.271979   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:45.271990   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:45.272004   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:45.347600   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:45.347635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:45.392203   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:45.392229   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:45.444012   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:45.444045   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:45.458106   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:45.458130   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:45.540275   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.041410   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:48.057834   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:48.057889   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:48.094318   64287 cri.go:89] found id: ""
	I1009 20:19:48.094346   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.094355   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:48.094362   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:48.094406   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:48.129645   64287 cri.go:89] found id: ""
	I1009 20:19:48.129672   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.129683   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:48.129691   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:48.129743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:48.164423   64287 cri.go:89] found id: ""
	I1009 20:19:48.164446   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.164454   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:48.164460   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:48.164519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:48.197708   64287 cri.go:89] found id: ""
	I1009 20:19:48.197736   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.197745   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:48.197750   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:48.197796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:48.235885   64287 cri.go:89] found id: ""
	I1009 20:19:48.235913   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.235925   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:48.235931   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:48.235995   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:48.272458   64287 cri.go:89] found id: ""
	I1009 20:19:48.272492   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.272504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:48.272513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:48.272580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:48.307152   64287 cri.go:89] found id: ""
	I1009 20:19:48.307180   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.307190   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:48.307197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:48.307255   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:48.347335   64287 cri.go:89] found id: ""
	I1009 20:19:48.347366   64287 logs.go:282] 0 containers: []
	W1009 20:19:48.347376   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:48.347387   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:48.347401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:48.418125   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:48.418161   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:48.433361   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:48.433386   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:48.524863   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:48.524879   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:48.524890   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:48.612196   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:48.612247   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:49.380735   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.381731   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.898962   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.899005   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:49.581882   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.582193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:51.149683   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:51.164603   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:51.164663   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:51.197120   64287 cri.go:89] found id: ""
	I1009 20:19:51.197151   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.197162   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:51.197170   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:51.197228   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:51.233612   64287 cri.go:89] found id: ""
	I1009 20:19:51.233641   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.233651   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:51.233660   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:51.233726   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:51.267119   64287 cri.go:89] found id: ""
	I1009 20:19:51.267150   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.267159   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:51.267168   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:51.267233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:51.301816   64287 cri.go:89] found id: ""
	I1009 20:19:51.301845   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.301854   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:51.301859   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:51.301917   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:51.335483   64287 cri.go:89] found id: ""
	I1009 20:19:51.335524   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.335535   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:51.335543   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:51.335604   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:51.370207   64287 cri.go:89] found id: ""
	I1009 20:19:51.370241   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.370252   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:51.370258   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:51.370320   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:51.406925   64287 cri.go:89] found id: ""
	I1009 20:19:51.406949   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.406956   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:51.406962   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:51.407015   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:51.446354   64287 cri.go:89] found id: ""
	I1009 20:19:51.446378   64287 logs.go:282] 0 containers: []
	W1009 20:19:51.446386   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:51.446394   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:51.446405   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:51.496627   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:51.496657   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:51.509587   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:51.509610   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:51.583276   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:51.583295   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:51.583306   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:51.661552   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:51.661584   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:54.202782   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:54.227761   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:54.227829   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:54.261338   64287 cri.go:89] found id: ""
	I1009 20:19:54.261366   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.261374   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:54.261381   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:54.261435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:54.300387   64287 cri.go:89] found id: ""
	I1009 20:19:54.300414   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.300424   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:54.300429   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:54.300485   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:54.339083   64287 cri.go:89] found id: ""
	I1009 20:19:54.339110   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.339122   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:54.339129   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:54.339180   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:54.374145   64287 cri.go:89] found id: ""
	I1009 20:19:54.374174   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.374182   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:54.374188   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:54.374240   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:54.411872   64287 cri.go:89] found id: ""
	I1009 20:19:54.411904   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.411918   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:54.411926   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:54.411992   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:54.449459   64287 cri.go:89] found id: ""
	I1009 20:19:54.449493   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.449504   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:54.449512   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:54.449575   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:54.482728   64287 cri.go:89] found id: ""
	I1009 20:19:54.482752   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.482762   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:54.482770   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:54.482830   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:54.516220   64287 cri.go:89] found id: ""
	I1009 20:19:54.516252   64287 logs.go:282] 0 containers: []
	W1009 20:19:54.516261   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:54.516270   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:54.516280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:54.569531   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:54.569560   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:54.583371   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:54.583395   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:19:53.880843   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.381025   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.399599   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.399727   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:54.080838   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:56.081451   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	W1009 20:19:54.651718   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:54.651742   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:54.651757   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:54.728869   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:54.728903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.270702   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:19:57.284287   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:19:57.284351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:19:57.317235   64287 cri.go:89] found id: ""
	I1009 20:19:57.317269   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.317279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:19:57.317290   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:19:57.317349   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:19:57.350030   64287 cri.go:89] found id: ""
	I1009 20:19:57.350058   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.350066   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:19:57.350071   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:19:57.350118   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:19:57.382840   64287 cri.go:89] found id: ""
	I1009 20:19:57.382867   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.382877   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:19:57.382884   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:19:57.382935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:19:57.417193   64287 cri.go:89] found id: ""
	I1009 20:19:57.417229   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.417239   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:19:57.417247   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:19:57.417309   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:19:57.456417   64287 cri.go:89] found id: ""
	I1009 20:19:57.456445   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.456454   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:19:57.456461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:19:57.456523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:19:57.490156   64287 cri.go:89] found id: ""
	I1009 20:19:57.490185   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.490193   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:19:57.490199   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:19:57.490246   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:19:57.523983   64287 cri.go:89] found id: ""
	I1009 20:19:57.524013   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.524023   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:19:57.524030   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:19:57.524093   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:19:57.562288   64287 cri.go:89] found id: ""
	I1009 20:19:57.562317   64287 logs.go:282] 0 containers: []
	W1009 20:19:57.562325   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:19:57.562334   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:19:57.562345   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:19:57.602475   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:19:57.602502   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:19:57.656636   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:19:57.656668   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:19:57.670738   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:19:57.670765   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:19:57.742943   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:19:57.742968   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:19:57.742979   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:19:58.384537   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.881670   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.897654   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.899099   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:02.899381   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:19:58.581059   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:01.081778   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:00.321926   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:00.335475   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:00.335546   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:00.369727   64287 cri.go:89] found id: ""
	I1009 20:20:00.369762   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.369770   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:00.369776   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:00.369823   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:00.408917   64287 cri.go:89] found id: ""
	I1009 20:20:00.408943   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.408953   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:00.408964   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:00.409013   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:00.447646   64287 cri.go:89] found id: ""
	I1009 20:20:00.447676   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.447687   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:00.447694   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:00.447754   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:00.485752   64287 cri.go:89] found id: ""
	I1009 20:20:00.485780   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.485790   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:00.485797   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:00.485859   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:00.519568   64287 cri.go:89] found id: ""
	I1009 20:20:00.519592   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.519600   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:00.519606   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:00.519667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:00.553288   64287 cri.go:89] found id: ""
	I1009 20:20:00.553323   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.553334   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:00.553342   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:00.553402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:00.593842   64287 cri.go:89] found id: ""
	I1009 20:20:00.593868   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.593875   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:00.593882   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:00.593938   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:00.630808   64287 cri.go:89] found id: ""
	I1009 20:20:00.630839   64287 logs.go:282] 0 containers: []
	W1009 20:20:00.630849   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:00.630859   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:00.630873   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:00.681858   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:00.681888   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:00.695365   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:00.695391   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:00.768651   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:00.768679   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:00.768693   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:00.843999   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:00.844034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.390483   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:03.405406   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:03.405476   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:03.440025   64287 cri.go:89] found id: ""
	I1009 20:20:03.440048   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.440055   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:03.440061   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:03.440113   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:03.475407   64287 cri.go:89] found id: ""
	I1009 20:20:03.475440   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.475450   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:03.475456   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:03.475511   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:03.512656   64287 cri.go:89] found id: ""
	I1009 20:20:03.512680   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.512688   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:03.512693   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:03.512749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:03.549174   64287 cri.go:89] found id: ""
	I1009 20:20:03.549204   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.549212   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:03.549217   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:03.549282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:03.586093   64287 cri.go:89] found id: ""
	I1009 20:20:03.586118   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.586128   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:03.586135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:03.586201   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:03.624221   64287 cri.go:89] found id: ""
	I1009 20:20:03.624248   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.624258   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:03.624271   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:03.624342   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:03.658759   64287 cri.go:89] found id: ""
	I1009 20:20:03.658781   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.658789   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:03.658794   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:03.658850   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:03.692200   64287 cri.go:89] found id: ""
	I1009 20:20:03.692227   64287 logs.go:282] 0 containers: []
	W1009 20:20:03.692237   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:03.692247   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:03.692263   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:03.745949   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:03.745985   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:03.759691   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:03.759724   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:03.833000   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:03.833020   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:03.833034   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:03.911321   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:03.911352   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:03.381014   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.881096   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:04.900780   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:07.398348   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:03.580442   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:05.582159   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:08.080528   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:06.451158   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:06.466356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:06.466435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:06.502907   64287 cri.go:89] found id: ""
	I1009 20:20:06.502936   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.502944   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:06.502950   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:06.503000   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:06.540938   64287 cri.go:89] found id: ""
	I1009 20:20:06.540961   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.540969   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:06.540974   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:06.541033   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:06.575587   64287 cri.go:89] found id: ""
	I1009 20:20:06.575616   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.575632   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:06.575640   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:06.575696   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:06.611052   64287 cri.go:89] found id: ""
	I1009 20:20:06.611093   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.611103   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:06.611110   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:06.611170   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:06.647763   64287 cri.go:89] found id: ""
	I1009 20:20:06.647793   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.647804   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:06.647811   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:06.647876   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:06.682423   64287 cri.go:89] found id: ""
	I1009 20:20:06.682449   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.682460   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:06.682471   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:06.682541   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:06.718096   64287 cri.go:89] found id: ""
	I1009 20:20:06.718124   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.718135   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:06.718141   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:06.718200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:06.753320   64287 cri.go:89] found id: ""
	I1009 20:20:06.753344   64287 logs.go:282] 0 containers: []
	W1009 20:20:06.753353   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:06.753361   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:06.753375   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:06.809610   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:06.809640   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:06.823651   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:06.823680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:06.895796   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:06.895819   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:06.895833   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:06.972602   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:06.972635   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:09.513909   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:09.527143   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:09.527254   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:09.560406   64287 cri.go:89] found id: ""
	I1009 20:20:09.560432   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.560440   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:09.560445   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:09.560493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:09.600180   64287 cri.go:89] found id: ""
	I1009 20:20:09.600202   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.600219   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:09.600225   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:09.600285   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:08.380652   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.880056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.398968   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:11.897696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:10.081007   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:12.081291   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:09.638356   64287 cri.go:89] found id: ""
	I1009 20:20:09.638383   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.638393   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:09.638398   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:09.638450   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:09.680589   64287 cri.go:89] found id: ""
	I1009 20:20:09.680616   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.680627   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:09.680635   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:09.680686   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:09.719018   64287 cri.go:89] found id: ""
	I1009 20:20:09.719041   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.719049   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:09.719054   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:09.719129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:09.757262   64287 cri.go:89] found id: ""
	I1009 20:20:09.757290   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.757298   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:09.757305   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:09.757364   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:09.796127   64287 cri.go:89] found id: ""
	I1009 20:20:09.796157   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.796168   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:09.796176   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:09.796236   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:09.830650   64287 cri.go:89] found id: ""
	I1009 20:20:09.830679   64287 logs.go:282] 0 containers: []
	W1009 20:20:09.830689   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:09.830699   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:09.830713   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:09.882638   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:09.882666   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:09.897458   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:09.897488   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:09.964440   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:09.964462   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:09.964473   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:10.040103   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:10.040138   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.590159   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:12.603380   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:12.603448   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:12.636246   64287 cri.go:89] found id: ""
	I1009 20:20:12.636272   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.636281   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:12.636288   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:12.636392   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:12.669400   64287 cri.go:89] found id: ""
	I1009 20:20:12.669429   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.669439   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:12.669446   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:12.669493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:12.705076   64287 cri.go:89] found id: ""
	I1009 20:20:12.705104   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.705114   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:12.705122   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:12.705198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:12.738883   64287 cri.go:89] found id: ""
	I1009 20:20:12.738914   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.738926   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:12.738933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:12.738988   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:12.773549   64287 cri.go:89] found id: ""
	I1009 20:20:12.773572   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.773580   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:12.773592   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:12.773709   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:12.813123   64287 cri.go:89] found id: ""
	I1009 20:20:12.813148   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.813156   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:12.813162   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:12.813215   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:12.851272   64287 cri.go:89] found id: ""
	I1009 20:20:12.851305   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.851317   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:12.851325   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:12.851389   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:12.891399   64287 cri.go:89] found id: ""
	I1009 20:20:12.891422   64287 logs.go:282] 0 containers: []
	W1009 20:20:12.891429   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:12.891436   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:12.891455   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:12.945839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:12.945868   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:12.959711   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:12.959735   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:13.028015   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:13.028034   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:13.028048   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:13.108451   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:13.108491   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:12.881443   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.381891   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.398650   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.401925   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:14.580306   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:16.580836   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:15.651166   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:15.664618   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:15.664692   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:15.697088   64287 cri.go:89] found id: ""
	I1009 20:20:15.697117   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.697127   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:15.697137   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:15.697198   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:15.738641   64287 cri.go:89] found id: ""
	I1009 20:20:15.738671   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.738682   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:15.738690   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:15.738747   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:15.771293   64287 cri.go:89] found id: ""
	I1009 20:20:15.771318   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.771326   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:15.771332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:15.771391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:15.804234   64287 cri.go:89] found id: ""
	I1009 20:20:15.804263   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.804271   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:15.804279   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:15.804329   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:15.840914   64287 cri.go:89] found id: ""
	I1009 20:20:15.840964   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.840975   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:15.840983   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:15.841041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:15.878243   64287 cri.go:89] found id: ""
	I1009 20:20:15.878270   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.878280   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:15.878288   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:15.878344   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:15.917371   64287 cri.go:89] found id: ""
	I1009 20:20:15.917398   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.917409   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:15.917416   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:15.917473   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:15.951443   64287 cri.go:89] found id: ""
	I1009 20:20:15.951470   64287 logs.go:282] 0 containers: []
	W1009 20:20:15.951481   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:15.951490   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:15.951504   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:16.017601   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:16.017629   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:16.017643   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:16.095915   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:16.095946   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:16.141704   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:16.141737   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:16.197391   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:16.197424   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:18.712278   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:18.725451   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:18.725513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:18.757618   64287 cri.go:89] found id: ""
	I1009 20:20:18.757640   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.757650   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:18.757657   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:18.757715   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:18.791651   64287 cri.go:89] found id: ""
	I1009 20:20:18.791677   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.791686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:18.791693   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:18.791750   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:18.826402   64287 cri.go:89] found id: ""
	I1009 20:20:18.826430   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.826440   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:18.826449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:18.826522   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:18.868610   64287 cri.go:89] found id: ""
	I1009 20:20:18.868634   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.868644   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:18.868652   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:18.868710   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:18.905499   64287 cri.go:89] found id: ""
	I1009 20:20:18.905520   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.905527   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:18.905532   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:18.905588   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:18.938772   64287 cri.go:89] found id: ""
	I1009 20:20:18.938795   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.938803   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:18.938809   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:18.938855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:18.974712   64287 cri.go:89] found id: ""
	I1009 20:20:18.974742   64287 logs.go:282] 0 containers: []
	W1009 20:20:18.974753   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:18.974760   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:18.974820   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:19.008681   64287 cri.go:89] found id: ""
	I1009 20:20:19.008710   64287 logs.go:282] 0 containers: []
	W1009 20:20:19.008718   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:19.008726   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:19.008736   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:19.059862   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:19.059891   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:19.073071   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:19.073096   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:19.142163   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:19.142189   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:19.142204   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:19.226645   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:19.226691   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:17.880874   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.881056   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.881553   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:18.898733   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:20.899569   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:19.081883   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.581532   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:21.767167   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:21.780448   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:21.780530   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:21.813670   64287 cri.go:89] found id: ""
	I1009 20:20:21.813699   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.813708   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:21.813714   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:21.813760   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:21.850793   64287 cri.go:89] found id: ""
	I1009 20:20:21.850826   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.850838   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:21.850845   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:21.850904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:21.887886   64287 cri.go:89] found id: ""
	I1009 20:20:21.887919   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.887931   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:21.887938   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:21.887987   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:21.926620   64287 cri.go:89] found id: ""
	I1009 20:20:21.926651   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.926661   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:21.926669   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:21.926734   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:21.962822   64287 cri.go:89] found id: ""
	I1009 20:20:21.962859   64287 logs.go:282] 0 containers: []
	W1009 20:20:21.962867   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:21.962872   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:21.962932   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:22.001043   64287 cri.go:89] found id: ""
	I1009 20:20:22.001070   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.001080   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:22.001088   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:22.001145   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:22.034111   64287 cri.go:89] found id: ""
	I1009 20:20:22.034139   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.034148   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:22.034153   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:22.034200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:22.067601   64287 cri.go:89] found id: ""
	I1009 20:20:22.067629   64287 logs.go:282] 0 containers: []
	W1009 20:20:22.067640   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:22.067649   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:22.067663   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:22.081545   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:22.081575   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:22.158725   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:22.158749   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:22.158761   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:22.249086   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:22.249133   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:22.287435   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:22.287462   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
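The cycle above (pgrep for kube-apiserver, then crictl queries for each control-plane component, then kubelet/dmesg/describe-nodes/CRI-O/container-status collection) repeats throughout this log because no control-plane containers ever appear. A minimal sketch of that probe is shown below; it is not minikube's implementation, only an illustration under the assumption that crictl is run locally rather than through minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components probed in the log lines above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same command as in the log: list all containers whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %q: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Mirrors the repeated warning in the log.
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		fmt.Printf("found %d container(s) for %q\n", len(ids), name)
	}
}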
	I1009 20:20:24.380294   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.880564   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:23.398659   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:25.399216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:27.898475   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:24.080871   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:26.580818   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
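The interleaved pod_ready lines (PIDs 64109, 63427, 63744) come from other test clusters polling whether their metrics-server pod has reached the Ready condition. The sketch below shows what such a readiness poll looks like with client-go; the kubeconfig path and pod name are taken from the log for illustration and are assumptions, not minikube's own helper code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name copied from the log; in practice it would be discovered by label.
	name := "metrics-server-6867b74b74-8p24l"
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Matches the repeated pod_ready.go:103 lines above.
		fmt.Printf("pod %q in \"kube-system\" namespace has status \"Ready\":\"False\"\n", name)
		time.Sleep(2 * time.Second)
	}
}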
	I1009 20:20:24.838935   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:24.852057   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:24.852126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:24.887454   64287 cri.go:89] found id: ""
	I1009 20:20:24.887488   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.887500   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:24.887507   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:24.887565   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:24.928273   64287 cri.go:89] found id: ""
	I1009 20:20:24.928295   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.928303   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:24.928309   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:24.928367   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:24.962116   64287 cri.go:89] found id: ""
	I1009 20:20:24.962152   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.962164   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:24.962172   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:24.962252   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:24.996909   64287 cri.go:89] found id: ""
	I1009 20:20:24.996934   64287 logs.go:282] 0 containers: []
	W1009 20:20:24.996942   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:24.996947   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:24.996996   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:25.030615   64287 cri.go:89] found id: ""
	I1009 20:20:25.030647   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.030658   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:25.030665   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:25.030725   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:25.066069   64287 cri.go:89] found id: ""
	I1009 20:20:25.066096   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.066104   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:25.066109   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:25.066158   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:25.101762   64287 cri.go:89] found id: ""
	I1009 20:20:25.101791   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.101799   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:25.101807   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:25.101854   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:25.139704   64287 cri.go:89] found id: ""
	I1009 20:20:25.139730   64287 logs.go:282] 0 containers: []
	W1009 20:20:25.139738   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:25.139745   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:25.139756   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:25.190212   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:25.190257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:25.206181   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:25.206206   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:25.276523   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:25.276548   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:25.276562   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:25.352477   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:25.352509   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:27.894112   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:27.907965   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:27.908018   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:27.942933   64287 cri.go:89] found id: ""
	I1009 20:20:27.942959   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.942967   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:27.942973   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:27.943029   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:27.995890   64287 cri.go:89] found id: ""
	I1009 20:20:27.995917   64287 logs.go:282] 0 containers: []
	W1009 20:20:27.995929   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:27.995936   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:27.995985   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:28.031877   64287 cri.go:89] found id: ""
	I1009 20:20:28.031904   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.031914   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:28.031922   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:28.031975   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:28.073691   64287 cri.go:89] found id: ""
	I1009 20:20:28.073720   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.073730   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:28.073738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:28.073796   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:28.109946   64287 cri.go:89] found id: ""
	I1009 20:20:28.109975   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.109987   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:28.109995   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:28.110041   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:28.144771   64287 cri.go:89] found id: ""
	I1009 20:20:28.144801   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.144822   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:28.144830   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:28.144892   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:28.179617   64287 cri.go:89] found id: ""
	I1009 20:20:28.179640   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.179647   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:28.179653   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:28.179698   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:28.213734   64287 cri.go:89] found id: ""
	I1009 20:20:28.213759   64287 logs.go:282] 0 containers: []
	W1009 20:20:28.213767   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:28.213775   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:28.213787   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:28.227778   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:28.227803   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:28.298025   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:28.298057   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:28.298071   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:28.378664   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:28.378700   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:28.417577   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:28.417602   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:29.380480   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.382239   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.396952   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:32.399211   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:29.079718   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:31.083332   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:30.968360   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:30.981229   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:30.981295   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:31.013373   64287 cri.go:89] found id: ""
	I1009 20:20:31.013397   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.013408   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:31.013415   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:31.013468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:31.044387   64287 cri.go:89] found id: ""
	I1009 20:20:31.044408   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.044416   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:31.044421   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:31.044490   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:31.079677   64287 cri.go:89] found id: ""
	I1009 20:20:31.079702   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.079718   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:31.079727   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:31.079788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:31.118895   64287 cri.go:89] found id: ""
	I1009 20:20:31.118921   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.118933   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:31.118940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:31.118997   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:31.157008   64287 cri.go:89] found id: ""
	I1009 20:20:31.157035   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.157043   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:31.157049   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:31.157096   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:31.188999   64287 cri.go:89] found id: ""
	I1009 20:20:31.189024   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.189032   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:31.189038   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:31.189095   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:31.225314   64287 cri.go:89] found id: ""
	I1009 20:20:31.225341   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.225351   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:31.225359   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:31.225426   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:31.259864   64287 cri.go:89] found id: ""
	I1009 20:20:31.259891   64287 logs.go:282] 0 containers: []
	W1009 20:20:31.259899   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:31.259907   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:31.259918   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:31.333579   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:31.333615   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:31.375852   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:31.375884   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:31.428346   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:31.428377   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:31.442927   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:31.442951   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:31.512924   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:34.013346   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:34.026671   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:34.026729   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:34.062445   64287 cri.go:89] found id: ""
	I1009 20:20:34.062469   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.062479   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:34.062487   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:34.062586   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:34.096670   64287 cri.go:89] found id: ""
	I1009 20:20:34.096692   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.096699   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:34.096705   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:34.096752   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:34.133653   64287 cri.go:89] found id: ""
	I1009 20:20:34.133682   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.133702   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:34.133711   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:34.133770   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:34.167514   64287 cri.go:89] found id: ""
	I1009 20:20:34.167541   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.167552   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:34.167560   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:34.167631   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:34.200397   64287 cri.go:89] found id: ""
	I1009 20:20:34.200427   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.200438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:34.200446   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:34.200504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:34.236507   64287 cri.go:89] found id: ""
	I1009 20:20:34.236534   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.236544   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:34.236551   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:34.236611   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:34.272611   64287 cri.go:89] found id: ""
	I1009 20:20:34.272639   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.272650   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:34.272658   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:34.272733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:34.311392   64287 cri.go:89] found id: ""
	I1009 20:20:34.311417   64287 logs.go:282] 0 containers: []
	W1009 20:20:34.311426   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:34.311434   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:34.311445   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:34.401718   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:34.401751   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:34.463768   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:34.463798   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:34.526313   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:34.526347   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:34.540370   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:34.540401   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:34.610697   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
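Every "describe nodes" attempt in this log fails with "connection refused" on localhost:8443, which is consistent with the crictl probes finding no kube-apiserver container: nothing is listening on the apiserver port, so kubectl cannot connect. The snippet below is a small illustrative check, not part of the test suite; the port and kubectl path are taken from the log.

package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

func main() {
	// If nothing accepts connections on the apiserver port, kubectl fails with
	// "connection refused" exactly as shown in the stderr blocks above.
	addr := "localhost:8443"
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Printf("apiserver not reachable at %s: %v\n", addr, err)
		return
	}
	conn.Close()
	// Only worth running once the port answers; paths mirror the log.
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
		"describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	fmt.Printf("%s(err: %v)\n", out, err)
}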
	I1009 20:20:33.880836   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:35.881010   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:34.399526   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.401486   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:33.581544   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:36.080875   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.085744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:37.111821   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:37.125012   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:37.125073   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:37.165105   64287 cri.go:89] found id: ""
	I1009 20:20:37.165135   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.165144   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:37.165151   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:37.165217   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:37.201367   64287 cri.go:89] found id: ""
	I1009 20:20:37.201393   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.201403   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:37.201412   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:37.201470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:37.234258   64287 cri.go:89] found id: ""
	I1009 20:20:37.234283   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.234291   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:37.234297   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:37.234351   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:37.270765   64287 cri.go:89] found id: ""
	I1009 20:20:37.270790   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.270798   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:37.270803   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:37.270855   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:37.303931   64287 cri.go:89] found id: ""
	I1009 20:20:37.303962   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.303970   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:37.303976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:37.304035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:37.339438   64287 cri.go:89] found id: ""
	I1009 20:20:37.339466   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.339476   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:37.339484   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:37.339544   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:37.371538   64287 cri.go:89] found id: ""
	I1009 20:20:37.371565   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.371576   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:37.371584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:37.371644   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:37.414729   64287 cri.go:89] found id: ""
	I1009 20:20:37.414775   64287 logs.go:282] 0 containers: []
	W1009 20:20:37.414785   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:37.414803   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:37.414818   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:37.453989   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:37.454013   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:37.504516   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:37.504551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:37.520317   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:37.520353   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:37.590144   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:37.590163   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:37.590175   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:38.381407   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.381518   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:38.897837   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.897916   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.898202   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.581182   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:42.582744   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:40.167604   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:40.191718   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:40.191788   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:40.247439   64287 cri.go:89] found id: ""
	I1009 20:20:40.247467   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.247475   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:40.247482   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:40.247549   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:40.284012   64287 cri.go:89] found id: ""
	I1009 20:20:40.284043   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.284055   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:40.284063   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:40.284124   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:40.321347   64287 cri.go:89] found id: ""
	I1009 20:20:40.321378   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.321386   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:40.321391   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:40.321456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:40.364063   64287 cri.go:89] found id: ""
	I1009 20:20:40.364084   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.364092   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:40.364098   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:40.364152   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:40.400423   64287 cri.go:89] found id: ""
	I1009 20:20:40.400449   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.400458   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:40.400467   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:40.400525   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:40.434538   64287 cri.go:89] found id: ""
	I1009 20:20:40.434567   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.434576   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:40.434584   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:40.434647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:40.468860   64287 cri.go:89] found id: ""
	I1009 20:20:40.468909   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.468921   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:40.468928   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:40.468990   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:40.501583   64287 cri.go:89] found id: ""
	I1009 20:20:40.501607   64287 logs.go:282] 0 containers: []
	W1009 20:20:40.501615   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:40.501624   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:40.501639   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:40.558878   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:40.558919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:40.573191   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:40.573218   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:40.640959   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:40.640980   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:40.640996   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:40.716475   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:40.716510   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.255685   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:43.269113   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:43.269182   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:43.305892   64287 cri.go:89] found id: ""
	I1009 20:20:43.305920   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.305931   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:43.305939   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:43.305999   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:43.341486   64287 cri.go:89] found id: ""
	I1009 20:20:43.341515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.341525   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:43.341532   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:43.341592   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:43.375473   64287 cri.go:89] found id: ""
	I1009 20:20:43.375496   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.375506   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:43.375513   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:43.375577   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:43.411235   64287 cri.go:89] found id: ""
	I1009 20:20:43.411259   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.411268   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:43.411274   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:43.411330   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:43.444884   64287 cri.go:89] found id: ""
	I1009 20:20:43.444914   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.444926   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:43.444933   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:43.444993   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:43.479151   64287 cri.go:89] found id: ""
	I1009 20:20:43.479177   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.479187   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:43.479195   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:43.479261   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:43.512485   64287 cri.go:89] found id: ""
	I1009 20:20:43.512515   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.512523   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:43.512530   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:43.512580   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:43.546511   64287 cri.go:89] found id: ""
	I1009 20:20:43.546533   64287 logs.go:282] 0 containers: []
	W1009 20:20:43.546541   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:43.546549   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:43.546561   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:43.623938   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:43.623970   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:43.667655   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:43.667680   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:43.724747   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:43.724778   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:43.740060   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:43.740081   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:43.820910   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:42.880030   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:44.880596   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.880640   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.399270   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.899013   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:45.081796   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:47.580573   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:46.321796   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:46.337028   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:46.337086   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:46.374564   64287 cri.go:89] found id: ""
	I1009 20:20:46.374587   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.374595   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:46.374601   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:46.374662   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:46.411418   64287 cri.go:89] found id: ""
	I1009 20:20:46.411453   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.411470   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:46.411477   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:46.411535   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:46.447726   64287 cri.go:89] found id: ""
	I1009 20:20:46.447750   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.447758   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:46.447763   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:46.447818   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:46.484691   64287 cri.go:89] found id: ""
	I1009 20:20:46.484721   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.484731   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:46.484738   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:46.484799   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:46.525017   64287 cri.go:89] found id: ""
	I1009 20:20:46.525052   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.525064   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:46.525071   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:46.525129   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:46.562306   64287 cri.go:89] found id: ""
	I1009 20:20:46.562334   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.562342   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:46.562350   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:46.562417   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:46.598067   64287 cri.go:89] found id: ""
	I1009 20:20:46.598099   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.598110   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:46.598117   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:46.598179   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:46.639484   64287 cri.go:89] found id: ""
	I1009 20:20:46.639515   64287 logs.go:282] 0 containers: []
	W1009 20:20:46.639526   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:46.639537   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:46.639551   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:46.694106   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:46.694140   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:46.709475   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:46.709501   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:46.781281   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:46.781308   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:46.781322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:46.862224   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:46.862262   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:49.402786   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:49.417432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:49.417537   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:49.454253   64287 cri.go:89] found id: ""
	I1009 20:20:49.454286   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.454296   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:49.454305   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:49.454366   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:49.490198   64287 cri.go:89] found id: ""
	I1009 20:20:49.490223   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.490234   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:49.490241   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:49.490307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:49.524286   64287 cri.go:89] found id: ""
	I1009 20:20:49.524312   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.524322   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:49.524330   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:49.524388   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:49.566415   64287 cri.go:89] found id: ""
	I1009 20:20:49.566444   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.566455   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:49.566462   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:49.566529   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:49.604306   64287 cri.go:89] found id: ""
	I1009 20:20:49.604335   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.604346   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:49.604353   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:49.604414   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:48.880756   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:51.381546   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:50.398989   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.399159   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.581256   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:52.081420   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:49.638514   64287 cri.go:89] found id: ""
	I1009 20:20:49.638543   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.638560   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:49.638568   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:49.638630   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:49.672158   64287 cri.go:89] found id: ""
	I1009 20:20:49.672182   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.672191   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:49.672197   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:49.672250   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:49.709865   64287 cri.go:89] found id: ""
	I1009 20:20:49.709887   64287 logs.go:282] 0 containers: []
	W1009 20:20:49.709897   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:49.709907   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:49.709919   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:49.762184   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:49.762220   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:49.775852   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:49.775880   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:49.850309   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:49.850329   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:49.850343   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:49.930225   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:49.930266   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:52.470580   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:52.484087   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:52.484141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:52.517440   64287 cri.go:89] found id: ""
	I1009 20:20:52.517461   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.517469   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:52.517475   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:52.517519   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:52.550340   64287 cri.go:89] found id: ""
	I1009 20:20:52.550380   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.550392   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:52.550399   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:52.550468   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:52.586444   64287 cri.go:89] found id: ""
	I1009 20:20:52.586478   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.586488   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:52.586495   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:52.586551   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:52.620461   64287 cri.go:89] found id: ""
	I1009 20:20:52.620488   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.620499   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:52.620506   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:52.620566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:52.656032   64287 cri.go:89] found id: ""
	I1009 20:20:52.656063   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.656074   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:52.656082   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:52.656144   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:52.687083   64287 cri.go:89] found id: ""
	I1009 20:20:52.687110   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.687118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:52.687124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:52.687187   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:52.723413   64287 cri.go:89] found id: ""
	I1009 20:20:52.723442   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.723453   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:52.723461   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:52.723521   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:52.754656   64287 cri.go:89] found id: ""
	I1009 20:20:52.754687   64287 logs.go:282] 0 containers: []
	W1009 20:20:52.754698   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:52.754709   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:52.754721   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:52.807359   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:52.807398   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:52.821469   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:52.821500   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:52.893447   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:52.893470   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:52.893484   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:52.970051   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:52.970083   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:53.880365   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.881762   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.898472   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:57.397863   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:54.580495   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:56.581092   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:55.508078   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:55.521951   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:55.522012   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:55.556291   64287 cri.go:89] found id: ""
	I1009 20:20:55.556316   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.556324   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:55.556329   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:55.556380   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:55.591032   64287 cri.go:89] found id: ""
	I1009 20:20:55.591059   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.591079   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:55.591086   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:55.591141   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:55.636196   64287 cri.go:89] found id: ""
	I1009 20:20:55.636228   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.636239   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:55.636246   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:55.636310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:55.673291   64287 cri.go:89] found id: ""
	I1009 20:20:55.673313   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.673321   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:55.673327   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:55.673374   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:55.709457   64287 cri.go:89] found id: ""
	I1009 20:20:55.709486   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.709497   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:55.709504   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:55.709563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:55.748391   64287 cri.go:89] found id: ""
	I1009 20:20:55.748423   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.748434   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:55.748442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:55.748503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:55.780581   64287 cri.go:89] found id: ""
	I1009 20:20:55.780610   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.780620   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:55.780627   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:55.780688   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:55.816489   64287 cri.go:89] found id: ""
	I1009 20:20:55.816527   64287 logs.go:282] 0 containers: []
	W1009 20:20:55.816535   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:55.816554   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:55.816568   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:55.871679   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:55.871708   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:55.887895   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:55.887920   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:55.956814   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:55.956838   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:55.956850   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:56.031453   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:56.031489   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.569098   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:20:58.583558   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:20:58.583626   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:20:58.622296   64287 cri.go:89] found id: ""
	I1009 20:20:58.622326   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.622334   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:20:58.622340   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:20:58.622401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:20:58.663776   64287 cri.go:89] found id: ""
	I1009 20:20:58.663798   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.663806   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:20:58.663812   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:20:58.663858   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:20:58.699968   64287 cri.go:89] found id: ""
	I1009 20:20:58.699994   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.700002   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:20:58.700007   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:20:58.700066   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:20:58.733935   64287 cri.go:89] found id: ""
	I1009 20:20:58.733959   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.733968   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:20:58.733974   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:20:58.734030   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:20:58.768723   64287 cri.go:89] found id: ""
	I1009 20:20:58.768752   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.768763   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:20:58.768771   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:20:58.768834   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:20:58.803129   64287 cri.go:89] found id: ""
	I1009 20:20:58.803153   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.803161   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:20:58.803166   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:20:58.803237   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:20:58.836341   64287 cri.go:89] found id: ""
	I1009 20:20:58.836366   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.836374   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:20:58.836379   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:20:58.836437   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:20:58.872048   64287 cri.go:89] found id: ""
	I1009 20:20:58.872071   64287 logs.go:282] 0 containers: []
	W1009 20:20:58.872081   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:20:58.872091   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:20:58.872106   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:20:58.950133   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:20:58.950167   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:20:58.988529   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:20:58.988555   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:20:59.038377   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:20:59.038414   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:20:59.053398   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:20:59.053448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:20:59.120793   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:20:58.380051   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:00.380182   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:59.398592   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.898382   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:20:58.581266   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.081525   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:01.621691   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:01.634505   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:01.634563   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:01.670785   64287 cri.go:89] found id: ""
	I1009 20:21:01.670818   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.670826   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:01.670833   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:01.670897   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:01.712219   64287 cri.go:89] found id: ""
	I1009 20:21:01.712243   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.712255   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:01.712261   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:01.712307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:01.747175   64287 cri.go:89] found id: ""
	I1009 20:21:01.747204   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.747215   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:01.747222   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:01.747282   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:01.785359   64287 cri.go:89] found id: ""
	I1009 20:21:01.785382   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.785389   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:01.785396   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:01.785452   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:01.822385   64287 cri.go:89] found id: ""
	I1009 20:21:01.822415   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.822426   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:01.822433   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:01.822501   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:01.860839   64287 cri.go:89] found id: ""
	I1009 20:21:01.860871   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.860880   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:01.860889   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:01.860935   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:01.899191   64287 cri.go:89] found id: ""
	I1009 20:21:01.899215   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.899224   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:01.899232   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:01.899288   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:01.936692   64287 cri.go:89] found id: ""
	I1009 20:21:01.936721   64287 logs.go:282] 0 containers: []
	W1009 20:21:01.936729   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:01.936737   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:01.936748   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:02.014848   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:02.014883   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:02.058815   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:02.058846   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:02.110513   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:02.110543   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:02.123855   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:02.123878   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:02.193997   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:02.880277   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.881247   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:07.380330   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.899214   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.398320   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:03.580574   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:06.080382   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.081294   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:04.694766   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:04.707675   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:04.707743   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:04.741322   64287 cri.go:89] found id: ""
	I1009 20:21:04.741354   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.741365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:04.741374   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:04.741435   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:04.780649   64287 cri.go:89] found id: ""
	I1009 20:21:04.780676   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.780686   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:04.780694   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:04.780749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:04.817514   64287 cri.go:89] found id: ""
	I1009 20:21:04.817545   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.817557   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:04.817564   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:04.817672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:04.850848   64287 cri.go:89] found id: ""
	I1009 20:21:04.850871   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.850878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:04.850885   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:04.850942   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:04.885390   64287 cri.go:89] found id: ""
	I1009 20:21:04.885426   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.885438   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:04.885449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:04.885513   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:04.920199   64287 cri.go:89] found id: ""
	I1009 20:21:04.920221   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.920229   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:04.920235   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:04.920307   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:04.954597   64287 cri.go:89] found id: ""
	I1009 20:21:04.954619   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.954627   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:04.954634   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:04.954693   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:04.988236   64287 cri.go:89] found id: ""
	I1009 20:21:04.988262   64287 logs.go:282] 0 containers: []
	W1009 20:21:04.988270   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:04.988278   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:04.988289   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:05.039909   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:05.039939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:05.053556   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:05.053583   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:05.126596   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:05.126618   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:05.126628   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:05.202275   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:05.202309   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:07.740836   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:07.754095   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:07.754165   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:07.786584   64287 cri.go:89] found id: ""
	I1009 20:21:07.786613   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.786621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:07.786627   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:07.786672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:07.822365   64287 cri.go:89] found id: ""
	I1009 20:21:07.822388   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.822396   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:07.822410   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:07.822456   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:07.858058   64287 cri.go:89] found id: ""
	I1009 20:21:07.858083   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.858093   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:07.858100   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:07.858156   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:07.894319   64287 cri.go:89] found id: ""
	I1009 20:21:07.894345   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.894352   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:07.894358   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:07.894422   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:07.928620   64287 cri.go:89] found id: ""
	I1009 20:21:07.928648   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.928659   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:07.928667   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:07.928724   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:07.964923   64287 cri.go:89] found id: ""
	I1009 20:21:07.964956   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.964967   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:07.964976   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:07.965035   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:07.998308   64287 cri.go:89] found id: ""
	I1009 20:21:07.998336   64287 logs.go:282] 0 containers: []
	W1009 20:21:07.998347   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:07.998354   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:07.998402   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:08.032021   64287 cri.go:89] found id: ""
	I1009 20:21:08.032047   64287 logs.go:282] 0 containers: []
	W1009 20:21:08.032059   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:08.032070   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:08.032084   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:08.103843   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:08.103867   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:08.103882   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:08.185476   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:08.185507   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:08.226967   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:08.226994   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:08.304852   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:08.304887   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:09.389127   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:11.880856   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:08.399153   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.399356   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:12.897624   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.581193   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:13.082124   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:10.819345   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:10.832902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:10.832963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:10.873237   64287 cri.go:89] found id: ""
	I1009 20:21:10.873268   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.873279   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:10.873286   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:10.873350   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:10.907296   64287 cri.go:89] found id: ""
	I1009 20:21:10.907316   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.907324   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:10.907329   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:10.907377   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:10.946428   64287 cri.go:89] found id: ""
	I1009 20:21:10.946469   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.946481   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:10.946487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:10.946540   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:10.982175   64287 cri.go:89] found id: ""
	I1009 20:21:10.982199   64287 logs.go:282] 0 containers: []
	W1009 20:21:10.982207   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:10.982212   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:10.982259   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:11.016197   64287 cri.go:89] found id: ""
	I1009 20:21:11.016220   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.016243   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:11.016250   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:11.016318   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:11.055697   64287 cri.go:89] found id: ""
	I1009 20:21:11.055723   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.055732   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:11.055740   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:11.055806   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:11.093444   64287 cri.go:89] found id: ""
	I1009 20:21:11.093469   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.093480   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:11.093487   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:11.093548   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:11.133224   64287 cri.go:89] found id: ""
	I1009 20:21:11.133252   64287 logs.go:282] 0 containers: []
	W1009 20:21:11.133266   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:11.133276   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:11.133291   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:11.189020   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:11.189057   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:11.202652   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:11.202682   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:11.272789   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:11.272811   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:11.272824   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:11.354868   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:11.354904   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:13.896655   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:13.910126   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:13.910189   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:13.944472   64287 cri.go:89] found id: ""
	I1009 20:21:13.944497   64287 logs.go:282] 0 containers: []
	W1009 20:21:13.944505   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:13.944511   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:13.944566   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:14.003362   64287 cri.go:89] found id: ""
	I1009 20:21:14.003387   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.003397   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:14.003407   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:14.003470   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:14.037691   64287 cri.go:89] found id: ""
	I1009 20:21:14.037717   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.037726   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:14.037732   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:14.037792   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:14.079333   64287 cri.go:89] found id: ""
	I1009 20:21:14.079358   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.079368   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:14.079375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:14.079433   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:14.120821   64287 cri.go:89] found id: ""
	I1009 20:21:14.120843   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.120851   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:14.120857   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:14.120904   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:14.161089   64287 cri.go:89] found id: ""
	I1009 20:21:14.161118   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.161128   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:14.161135   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:14.161193   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:14.201711   64287 cri.go:89] found id: ""
	I1009 20:21:14.201739   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.201748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:14.201756   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:14.201814   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:14.238469   64287 cri.go:89] found id: ""
	I1009 20:21:14.238502   64287 logs.go:282] 0 containers: []
	W1009 20:21:14.238512   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:14.238520   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:14.238531   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:14.289786   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:14.289821   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:14.303876   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:14.303903   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:14.376426   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:14.376446   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:14.376459   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:14.458058   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:14.458095   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:14.381278   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:16.381782   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:14.899834   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.398309   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:15.580946   63744 pod_ready.go:103] pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:17.574819   63744 pod_ready.go:82] duration metric: took 4m0.000292386s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:17.574851   63744 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-6z7jj" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:17.574882   63744 pod_ready.go:39] duration metric: took 4m14.424118915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:17.574914   63744 kubeadm.go:597] duration metric: took 4m22.465328757s to restartPrimaryControlPlane
	W1009 20:21:17.574982   63744 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:17.575016   63744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:17.000623   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:17.015890   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:17.015963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:17.054136   64287 cri.go:89] found id: ""
	I1009 20:21:17.054166   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.054177   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:17.054185   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:17.054242   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:17.089501   64287 cri.go:89] found id: ""
	I1009 20:21:17.089538   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.089548   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:17.089556   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:17.089614   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:17.128042   64287 cri.go:89] found id: ""
	I1009 20:21:17.128066   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.128073   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:17.128079   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:17.128126   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:17.164663   64287 cri.go:89] found id: ""
	I1009 20:21:17.164689   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.164697   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:17.164703   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:17.164766   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:17.200865   64287 cri.go:89] found id: ""
	I1009 20:21:17.200891   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.200899   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:17.200906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:17.200963   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:17.241649   64287 cri.go:89] found id: ""
	I1009 20:21:17.241675   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.241683   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:17.241690   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:17.241749   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:17.277390   64287 cri.go:89] found id: ""
	I1009 20:21:17.277424   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.277436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:17.277449   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:17.277515   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:17.316942   64287 cri.go:89] found id: ""
	I1009 20:21:17.316973   64287 logs.go:282] 0 containers: []
	W1009 20:21:17.316985   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:17.316995   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:17.317015   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:17.360293   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:17.360322   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:17.413510   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:17.413546   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:17.427280   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:17.427310   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:17.509531   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:17.509551   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:17.509566   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:18.880550   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.881023   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:19.398723   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:21.899259   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:20.092463   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:20.106101   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:20.106168   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:20.147889   64287 cri.go:89] found id: ""
	I1009 20:21:20.147916   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.147925   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:20.147931   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:20.147980   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:20.183097   64287 cri.go:89] found id: ""
	I1009 20:21:20.183167   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.183179   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:20.183185   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:20.183233   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:20.217556   64287 cri.go:89] found id: ""
	I1009 20:21:20.217585   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.217596   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:20.217604   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:20.217661   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:20.256692   64287 cri.go:89] found id: ""
	I1009 20:21:20.256717   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.256728   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:20.256735   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:20.256797   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:20.290866   64287 cri.go:89] found id: ""
	I1009 20:21:20.290888   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.290896   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:20.290902   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:20.290954   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:20.326802   64287 cri.go:89] found id: ""
	I1009 20:21:20.326828   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.326836   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:20.326842   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:20.326901   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:20.362395   64287 cri.go:89] found id: ""
	I1009 20:21:20.362426   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.362436   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:20.362442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:20.362504   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:20.408354   64287 cri.go:89] found id: ""
	I1009 20:21:20.408381   64287 logs.go:282] 0 containers: []
	W1009 20:21:20.408391   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:20.408400   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:20.408415   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:20.426669   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:20.426694   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:20.525895   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:20.525927   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:20.525939   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:20.612620   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:20.612654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:20.653152   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:20.653179   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.205516   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:23.218432   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:23.218493   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:23.254327   64287 cri.go:89] found id: ""
	I1009 20:21:23.254355   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.254365   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:23.254372   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:23.254429   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:23.295411   64287 cri.go:89] found id: ""
	I1009 20:21:23.295437   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.295448   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:23.295463   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:23.295523   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:23.331631   64287 cri.go:89] found id: ""
	I1009 20:21:23.331661   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.331672   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:23.331679   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:23.331742   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:23.366114   64287 cri.go:89] found id: ""
	I1009 20:21:23.366139   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.366147   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:23.366152   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:23.366200   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:23.403549   64287 cri.go:89] found id: ""
	I1009 20:21:23.403580   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.403587   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:23.403593   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:23.403652   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:23.439231   64287 cri.go:89] found id: ""
	I1009 20:21:23.439254   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.439263   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:23.439268   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:23.439322   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:23.473417   64287 cri.go:89] found id: ""
	I1009 20:21:23.473441   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.473449   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:23.473455   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:23.473503   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:23.506129   64287 cri.go:89] found id: ""
	I1009 20:21:23.506151   64287 logs.go:282] 0 containers: []
	W1009 20:21:23.506159   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:23.506166   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:23.506176   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:23.546813   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:23.546836   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:23.599317   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:23.599346   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:23.612400   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:23.612426   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:23.684905   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:23.684924   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:23.684936   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:22.881084   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:25.380780   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:27.380875   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:23.899699   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.401044   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:26.267079   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:26.282873   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:26.282946   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:26.319632   64287 cri.go:89] found id: ""
	I1009 20:21:26.319657   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.319665   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:26.319671   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:26.319716   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:26.362263   64287 cri.go:89] found id: ""
	I1009 20:21:26.362290   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.362299   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:26.362306   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:26.362401   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:26.412274   64287 cri.go:89] found id: ""
	I1009 20:21:26.412309   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.412320   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:26.412332   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:26.412391   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:26.446754   64287 cri.go:89] found id: ""
	I1009 20:21:26.446774   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.446783   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:26.446788   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:26.446838   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:26.480333   64287 cri.go:89] found id: ""
	I1009 20:21:26.480359   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.480367   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:26.480375   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:26.480438   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:26.518440   64287 cri.go:89] found id: ""
	I1009 20:21:26.518469   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.518479   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:26.518486   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:26.518555   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:26.555100   64287 cri.go:89] found id: ""
	I1009 20:21:26.555127   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.555138   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:26.555146   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:26.555208   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:26.594515   64287 cri.go:89] found id: ""
	I1009 20:21:26.594538   64287 logs.go:282] 0 containers: []
	W1009 20:21:26.594550   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:26.594559   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:26.594573   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:26.647465   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:26.647511   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:26.661021   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:26.661042   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:26.732233   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:26.732265   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:26.732286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:26.813104   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:26.813143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
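The repeated cri.go/logs.go lines above show the probe pattern minikube uses while the control plane is down: for each component it shells "sudo crictl ps -a --quiet --name=<component>" into the node and treats an empty ID list as "no container found". A minimal local sketch of that probe, assuming crictl is on PATH and sudo is available (an illustration of the logged behaviour, not minikube's actual cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name
// filter; an empty slice corresponds to the "0 containers" lines in the log.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}

In the run above every probe comes back empty, which is why the loop keeps falling through to gathering kubelet, dmesg and CRI-O logs instead.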
	I1009 20:21:29.361485   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:29.374578   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:29.374647   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:29.409740   64287 cri.go:89] found id: ""
	I1009 20:21:29.409766   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.409774   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:29.409781   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:29.409826   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:29.443932   64287 cri.go:89] found id: ""
	I1009 20:21:29.443959   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.443970   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:29.443978   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:29.444070   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:29.485900   64287 cri.go:89] found id: ""
	I1009 20:21:29.485927   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.485935   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:29.485940   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:29.485994   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:29.527976   64287 cri.go:89] found id: ""
	I1009 20:21:29.528002   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.528013   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:29.528021   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:29.528080   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:29.572186   64287 cri.go:89] found id: ""
	I1009 20:21:29.572214   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.572235   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:29.572243   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:29.572310   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:29.612166   64287 cri.go:89] found id: ""
	I1009 20:21:29.612190   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.612200   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:29.612208   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:29.612267   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:29.880828   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:32.380494   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:28.897535   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:31.398369   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:29.646269   64287 cri.go:89] found id: ""
	I1009 20:21:29.646294   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.646312   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:29.646319   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:29.646375   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:29.680624   64287 cri.go:89] found id: ""
	I1009 20:21:29.680649   64287 logs.go:282] 0 containers: []
	W1009 20:21:29.680656   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:29.680663   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:29.680673   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:29.729251   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:29.729278   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:29.742746   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:29.742773   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:29.815128   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:29.815150   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:29.815164   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:29.893418   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:29.893448   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.433532   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:32.447090   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:32.447161   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:32.482662   64287 cri.go:89] found id: ""
	I1009 20:21:32.482688   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.482696   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:32.482702   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:32.482755   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:32.521292   64287 cri.go:89] found id: ""
	I1009 20:21:32.521321   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.521329   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:32.521337   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:32.521393   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:32.555868   64287 cri.go:89] found id: ""
	I1009 20:21:32.555894   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.555901   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:32.555906   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:32.555956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:32.593541   64287 cri.go:89] found id: ""
	I1009 20:21:32.593563   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.593570   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:32.593575   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:32.593632   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:32.627712   64287 cri.go:89] found id: ""
	I1009 20:21:32.627740   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.627751   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:32.627758   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:32.627816   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:32.660632   64287 cri.go:89] found id: ""
	I1009 20:21:32.660658   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.660669   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:32.660677   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:32.660733   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:32.697709   64287 cri.go:89] found id: ""
	I1009 20:21:32.697737   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.697748   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:32.697755   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:32.697810   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:32.734782   64287 cri.go:89] found id: ""
	I1009 20:21:32.734806   64287 logs.go:282] 0 containers: []
	W1009 20:21:32.734816   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:32.734827   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:32.734840   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:32.809239   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:32.809271   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:32.857109   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:32.857143   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:32.915156   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:32.915185   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:32.929782   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:32.929813   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:32.996321   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
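Each "describe nodes" attempt in this loop fails with "connection refused" on localhost:8443: the in-VM kubeconfig points at the local apiserver endpoint, and since no kube-apiserver container exists there is simply nothing listening. A quick way to confirm that from the node is a plain TCP dial against the port; a minimal sketch (address and timeout are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A refused dial means the apiserver is not running at all,
	// rather than a TLS or authentication problem.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}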
	I1009 20:21:34.380798   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:36.880717   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:33.399188   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.899631   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:35.497013   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:35.510645   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:35.510714   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:35.543840   64287 cri.go:89] found id: ""
	I1009 20:21:35.543869   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.543878   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:35.543883   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:35.543929   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:35.579206   64287 cri.go:89] found id: ""
	I1009 20:21:35.579235   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.579246   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:35.579254   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:35.579312   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:35.613362   64287 cri.go:89] found id: ""
	I1009 20:21:35.613393   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.613406   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:35.613414   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:35.613484   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:35.649553   64287 cri.go:89] found id: ""
	I1009 20:21:35.649584   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.649596   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:35.649605   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:35.649672   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:35.688665   64287 cri.go:89] found id: ""
	I1009 20:21:35.688695   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.688706   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:35.688714   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:35.688771   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:35.725958   64287 cri.go:89] found id: ""
	I1009 20:21:35.725979   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.725987   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:35.725993   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:35.726047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:35.758368   64287 cri.go:89] found id: ""
	I1009 20:21:35.758395   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.758405   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:35.758410   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:35.758455   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:35.790323   64287 cri.go:89] found id: ""
	I1009 20:21:35.790347   64287 logs.go:282] 0 containers: []
	W1009 20:21:35.790357   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:35.790367   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:35.790380   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:35.843721   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:35.843752   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:35.858894   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:35.858915   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:35.934242   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:35.934261   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:35.934273   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:36.016029   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:36.016062   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.554219   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:38.567266   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:38.567339   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:38.606292   64287 cri.go:89] found id: ""
	I1009 20:21:38.606328   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.606338   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:38.606344   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:38.606396   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:38.638807   64287 cri.go:89] found id: ""
	I1009 20:21:38.638831   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.638841   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:38.638849   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:38.638907   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:38.677635   64287 cri.go:89] found id: ""
	I1009 20:21:38.677665   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.677674   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:38.677682   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:38.677740   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:38.714847   64287 cri.go:89] found id: ""
	I1009 20:21:38.714870   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.714878   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:38.714886   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:38.714944   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:38.746460   64287 cri.go:89] found id: ""
	I1009 20:21:38.746487   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.746495   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:38.746501   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:38.746554   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:38.782027   64287 cri.go:89] found id: ""
	I1009 20:21:38.782055   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.782066   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:38.782073   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:38.782130   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:38.816859   64287 cri.go:89] found id: ""
	I1009 20:21:38.816885   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.816893   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:38.816899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:38.816961   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:38.857159   64287 cri.go:89] found id: ""
	I1009 20:21:38.857195   64287 logs.go:282] 0 containers: []
	W1009 20:21:38.857204   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:38.857212   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:38.857224   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:38.913209   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:38.913240   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:38.927593   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:38.927617   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:38.998178   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:38.998213   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:38.998226   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:39.080681   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:39.080716   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:38.882054   64109 pod_ready.go:103] pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.874981   64109 pod_ready.go:82] duration metric: took 4m0.000684397s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" ...
	E1009 20:21:40.875008   64109 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-8p24l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1009 20:21:40.875024   64109 pod_ready.go:39] duration metric: took 4m13.532570346s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:40.875056   64109 kubeadm.go:597] duration metric: took 4m22.188345085s to restartPrimaryControlPlane
	W1009 20:21:40.875130   64109 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:40.875162   64109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:38.397606   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:40.398216   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:42.398390   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
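Interleaved with the 64287 recovery loop, processes 64109 and 63427 are polling their metrics-server pods for the Ready condition; at 20:21:40 the 64109 wait hits its 4m0s ceiling, logs "will not retry", and falls back to resetting the cluster. The pattern behind those pod_ready.go lines is a plain poll-with-deadline; a minimal standard-library sketch, where isPodReady is a hypothetical stand-in for the real API query rather than anything in minikube:

package main

import (
	"errors"
	"fmt"
	"time"
)

// isPodReady is a hypothetical check standing in for querying the API
// server and inspecting the pod's Ready condition.
func isPodReady(namespace, name string) (bool, error) {
	return false, nil
}

// waitPodReady polls until the pod reports Ready or the timeout elapses,
// mirroring the 4m0s wait seen in the log.
func waitPodReady(namespace, name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ready, err := isPodReady(namespace, name)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for pod to be Ready")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitPodReady("kube-system", "metrics-server-6867b74b74-8p24l", 2*time.Second, 4*time.Minute)
	fmt.Println(err)
}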
	I1009 20:21:41.620092   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:41.633491   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:41.633564   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:41.671087   64287 cri.go:89] found id: ""
	I1009 20:21:41.671114   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.671123   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:41.671128   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:41.671184   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:41.706940   64287 cri.go:89] found id: ""
	I1009 20:21:41.706966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.706976   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:41.706984   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:41.707036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:41.745612   64287 cri.go:89] found id: ""
	I1009 20:21:41.745637   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.745646   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:41.745651   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:41.745706   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:41.786857   64287 cri.go:89] found id: ""
	I1009 20:21:41.786884   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.786895   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:41.786904   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:41.786958   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:41.825005   64287 cri.go:89] found id: ""
	I1009 20:21:41.825030   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.825041   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:41.825053   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:41.825100   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:41.863089   64287 cri.go:89] found id: ""
	I1009 20:21:41.863111   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.863118   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:41.863124   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:41.863169   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:41.907937   64287 cri.go:89] found id: ""
	I1009 20:21:41.907966   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.907980   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:41.907988   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:41.908047   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:41.948189   64287 cri.go:89] found id: ""
	I1009 20:21:41.948219   64287 logs.go:282] 0 containers: []
	W1009 20:21:41.948229   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:41.948243   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:41.948257   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:41.993008   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:41.993038   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:42.045831   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:42.045864   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:42.060255   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:42.060280   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:42.127657   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:42.127680   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:42.127696   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:44.398696   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:46.399642   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:43.855161   63744 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.280119061s)
	I1009 20:21:43.855245   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:43.871587   63744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:43.881677   63744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:43.891625   63744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:43.891646   63744 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:43.891689   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:43.901651   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:43.901705   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:43.911179   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:43.920389   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:43.920436   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:43.929812   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.938937   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:43.938989   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:43.948454   63744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:43.958881   63744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:43.958924   63744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
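The config check above walks admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf: for each file it greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint cannot be confirmed (here the greps fail only because the files no longer exist after the reset). A small sketch of that check-and-remove loop, with paths and endpoint taken from the log; this illustrates the logged behaviour rather than minikube's actual kubeadm.go:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so kubeadm init
			// regenerates a consistent set of kubeconfigs.
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}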
	I1009 20:21:43.970036   63744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:44.024453   63744 kubeadm.go:310] W1009 20:21:44.000704    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.025829   63744 kubeadm.go:310] W1009 20:21:44.002227    2540 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:21:44.142191   63744 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:21:44.713209   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:44.725754   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:21:44.725825   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:21:44.760976   64287 cri.go:89] found id: ""
	I1009 20:21:44.760997   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.761004   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:21:44.761011   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:21:44.761053   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:21:44.796955   64287 cri.go:89] found id: ""
	I1009 20:21:44.796977   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.796985   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:21:44.796991   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:21:44.797036   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:21:44.832558   64287 cri.go:89] found id: ""
	I1009 20:21:44.832590   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.832601   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:21:44.832608   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:21:44.832667   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:21:44.867869   64287 cri.go:89] found id: ""
	I1009 20:21:44.867898   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.867908   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:21:44.867916   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:21:44.867966   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:21:44.901395   64287 cri.go:89] found id: ""
	I1009 20:21:44.901423   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.901434   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:21:44.901442   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:21:44.901505   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:21:44.939276   64287 cri.go:89] found id: ""
	I1009 20:21:44.939310   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.939323   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:21:44.939337   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:21:44.939399   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:21:44.973692   64287 cri.go:89] found id: ""
	I1009 20:21:44.973719   64287 logs.go:282] 0 containers: []
	W1009 20:21:44.973728   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:21:44.973734   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:21:44.973782   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:21:45.007406   64287 cri.go:89] found id: ""
	I1009 20:21:45.007436   64287 logs.go:282] 0 containers: []
	W1009 20:21:45.007446   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:21:45.007457   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:21:45.007472   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:21:45.062199   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:21:45.062233   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:21:45.075739   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:21:45.075763   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:21:45.147623   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:21:45.147639   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:21:45.147654   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:21:45.229252   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:21:45.229286   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:21:47.777208   64287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:21:47.794054   64287 kubeadm.go:597] duration metric: took 4m2.743382732s to restartPrimaryControlPlane
	W1009 20:21:47.794132   64287 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1009 20:21:47.794159   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:21:48.789863   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:21:48.804981   64287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:21:48.815981   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:21:48.826318   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:21:48.826340   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:21:48.826390   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:21:48.838918   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:21:48.838976   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:21:48.851635   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:21:48.864173   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:21:48.864237   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:21:48.874606   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.885036   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:21:48.885097   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:21:48.894870   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:21:48.904993   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:21:48.905040   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:21:48.915393   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:21:49.145081   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
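Once the restart of the existing control plane is abandoned, the 64287 run re-bootstraps: "kubeadm reset --force" wipes the old state, the stale kubeconfig cleanup above runs, and "kubeadm init" is invoked with the bundled binaries on PATH and a list of preflight errors to ignore (existing manifest and etcd directories are expected on a reset-and-retry). A rough sketch of that two-step recovery, using flags taken from the log with the ignore list abbreviated; this is an illustration under those assumptions, not minikube's bootstrapper code:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command under sudo, prefixing PATH with the bundled
// kubeadm location the way the logged ssh_runner invocations do.
func run(binDir string, args ...string) error {
	full := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH")}, args...)
	cmd := exec.Command("sudo", full...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	binDir := "/var/lib/minikube/binaries/v1.20.0" // version used in this run
	if err := run(binDir, "kubeadm", "reset", "--force"); err != nil {
		fmt.Println("reset failed:", err)
		return
	}
	err := run(binDir, "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube-etcd")
	if err != nil {
		fmt.Println("init failed:", err)
	}
}

The 63744 run below follows the same reset-and-init path with the v1.31.1 binaries, and its init succeeds, producing the "Your Kubernetes control-plane has initialized successfully!" banner.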
	I1009 20:21:52.033314   63744 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:21:52.033383   63744 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:21:52.033489   63744 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:21:52.033625   63744 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:21:52.033705   63744 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:21:52.033799   63744 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:21:52.035555   63744 out.go:235]   - Generating certificates and keys ...
	I1009 20:21:52.035638   63744 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:21:52.035737   63744 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:21:52.035861   63744 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:21:52.035951   63744 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:21:52.036043   63744 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:21:52.036135   63744 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:21:52.036233   63744 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:21:52.036325   63744 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:21:52.036431   63744 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:21:52.036584   63744 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:21:52.036656   63744 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:21:52.036737   63744 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:21:52.036831   63744 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:21:52.036914   63744 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:21:52.036985   63744 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:21:52.037077   63744 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:21:52.037157   63744 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:21:52.037280   63744 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:21:52.037372   63744 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:21:52.038777   63744 out.go:235]   - Booting up control plane ...
	I1009 20:21:52.038872   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:21:52.038995   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:21:52.039101   63744 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:21:52.039242   63744 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:21:52.039338   63744 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:21:52.039393   63744 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:21:52.039593   63744 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:21:52.039746   63744 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:21:52.039813   63744 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.005827851s
	I1009 20:21:52.039917   63744 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:21:52.039996   63744 kubeadm.go:310] [api-check] The API server is healthy after 4.502512954s
	I1009 20:21:52.040127   63744 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:21:52.040319   63744 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:21:52.040402   63744 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:21:52.040606   63744 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-503330 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:21:52.040684   63744 kubeadm.go:310] [bootstrap-token] Using token: 69fwjj.t1glswhsta5w4zx2
	I1009 20:21:52.042352   63744 out.go:235]   - Configuring RBAC rules ...
	I1009 20:21:52.042456   63744 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:21:52.042526   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:21:52.042664   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:21:52.042773   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:21:52.042868   63744 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:21:52.042948   63744 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:21:52.043119   63744 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:21:52.043184   63744 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:21:52.043250   63744 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:21:52.043258   63744 kubeadm.go:310] 
	I1009 20:21:52.043360   63744 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:21:52.043377   63744 kubeadm.go:310] 
	I1009 20:21:52.043504   63744 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:21:52.043516   63744 kubeadm.go:310] 
	I1009 20:21:52.043554   63744 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:21:52.043639   63744 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:21:52.043711   63744 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:21:52.043721   63744 kubeadm.go:310] 
	I1009 20:21:52.043792   63744 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:21:52.043800   63744 kubeadm.go:310] 
	I1009 20:21:52.043838   63744 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:21:52.043844   63744 kubeadm.go:310] 
	I1009 20:21:52.043909   63744 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:21:52.044021   63744 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:21:52.044108   63744 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:21:52.044117   63744 kubeadm.go:310] 
	I1009 20:21:52.044225   63744 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:21:52.044350   63744 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:21:52.044365   63744 kubeadm.go:310] 
	I1009 20:21:52.044462   63744 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044591   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:21:52.044619   63744 kubeadm.go:310] 	--control-plane 
	I1009 20:21:52.044624   63744 kubeadm.go:310] 
	I1009 20:21:52.044732   63744 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:21:52.044739   63744 kubeadm.go:310] 
	I1009 20:21:52.044842   63744 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 69fwjj.t1glswhsta5w4zx2 \
	I1009 20:21:52.044956   63744 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:21:52.044967   63744 cni.go:84] Creating CNI manager for ""
	I1009 20:21:52.044973   63744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:21:52.047342   63744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:21:48.899752   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:51.398734   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:52.048508   63744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:21:52.060338   63744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
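	(The 496-byte conflist payload copied above is not reproduced in this report. As a rough, illustrative sketch only, a bridge CNI conflist of the kind minikube writes for a crio runtime typically looks like the following; the exact field values below are assumptions, not the actual file contents.)

	    # Hypothetical reconstruction of /etc/cni/net.d/1-k8s.conflist -- values are illustrative
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF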
	I1009 20:21:52.079526   63744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:21:52.079580   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.079669   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-503330 minikube.k8s.io/updated_at=2024_10_09T20_21_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=embed-certs-503330 minikube.k8s.io/primary=true
	I1009 20:21:52.296281   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:52.296296   63744 ops.go:34] apiserver oom_adj: -16
	I1009 20:21:52.796429   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.296570   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:53.797269   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.297261   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:54.797049   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.297194   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:55.796896   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.296658   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.796494   63744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:21:56.904248   63744 kubeadm.go:1113] duration metric: took 4.824720684s to wait for elevateKubeSystemPrivileges
	I1009 20:21:56.904284   63744 kubeadm.go:394] duration metric: took 5m1.847540023s to StartCluster
	I1009 20:21:56.904302   63744 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.904390   63744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:21:56.906918   63744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:21:56.907263   63744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.97 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:21:56.907349   63744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:21:56.907451   63744 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-503330"
	I1009 20:21:56.907487   63744 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-503330"
	I1009 20:21:56.907486   63744 addons.go:69] Setting default-storageclass=true in profile "embed-certs-503330"
	W1009 20:21:56.907496   63744 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:21:56.907502   63744 addons.go:69] Setting metrics-server=true in profile "embed-certs-503330"
	I1009 20:21:56.907527   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907540   63744 config.go:182] Loaded profile config "embed-certs-503330": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:21:56.907529   63744 addons.go:234] Setting addon metrics-server=true in "embed-certs-503330"
	W1009 20:21:56.907616   63744 addons.go:243] addon metrics-server should already be in state true
	I1009 20:21:56.907642   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.907508   63744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-503330"
	I1009 20:21:56.907976   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908018   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908038   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908061   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.908072   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.908105   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.909166   63744 out.go:177] * Verifying Kubernetes components...
	I1009 20:21:56.910945   63744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:21:56.924607   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I1009 20:21:56.925089   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.925624   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.925643   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.926009   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.926194   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.927999   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38425
	I1009 20:21:56.928182   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I1009 20:21:56.929496   63744 addons.go:234] Setting addon default-storageclass=true in "embed-certs-503330"
	W1009 20:21:56.929513   63744 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:21:56.929533   63744 host.go:66] Checking if "embed-certs-503330" exists ...
	I1009 20:21:56.929779   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.929804   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.930111   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930148   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.930590   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930607   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930727   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.930742   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.930950   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931022   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.931541   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.931583   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.932246   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.932292   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.945160   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43465
	I1009 20:21:56.945657   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.946102   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.946128   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.946469   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.947002   63744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:21:56.947044   63744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:21:56.951951   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I1009 20:21:56.952409   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.952851   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32831
	I1009 20:21:56.953051   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953068   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.953331   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.953407   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.953561   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.953830   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.953854   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.954204   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.954381   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.956314   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.956515   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.958947   63744 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:21:56.959026   63744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:21:53.898455   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:55.898680   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:57.899675   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:56.961002   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:21:56.961019   63744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:21:56.961036   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.961188   63744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:56.961206   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:21:56.961219   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.964087   63744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45297
	I1009 20:21:56.964490   63744 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:21:56.964644   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965040   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965298   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965511   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965539   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965577   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.965600   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.965761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965876   63744 main.go:141] libmachine: Using API Version  1
	I1009 20:21:56.965901   63744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:21:56.965901   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.965958   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966041   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.966083   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.966324   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:56.967052   63744 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:21:56.967288   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetState
	I1009 20:21:56.968690   63744 main.go:141] libmachine: (embed-certs-503330) Calling .DriverName
	I1009 20:21:56.968865   63744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:56.968880   63744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:21:56.968902   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHHostname
	I1009 20:21:56.971293   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971661   63744 main.go:141] libmachine: (embed-certs-503330) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:23:dc", ip: ""} in network mk-embed-certs-503330: {Iface:virbr2 ExpiryTime:2024-10-09 21:16:41 +0000 UTC Type:0 Mac:52:54:00:20:23:dc Iaid: IPaddr:192.168.50.97 Prefix:24 Hostname:embed-certs-503330 Clientid:01:52:54:00:20:23:dc}
	I1009 20:21:56.971682   63744 main.go:141] libmachine: (embed-certs-503330) DBG | domain embed-certs-503330 has defined IP address 192.168.50.97 and MAC address 52:54:00:20:23:dc in network mk-embed-certs-503330
	I1009 20:21:56.971807   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHPort
	I1009 20:21:56.971975   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHKeyPath
	I1009 20:21:56.972115   63744 main.go:141] libmachine: (embed-certs-503330) Calling .GetSSHUsername
	I1009 20:21:56.972249   63744 sshutil.go:53] new ssh client: &{IP:192.168.50.97 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/embed-certs-503330/id_rsa Username:docker}
	I1009 20:21:57.140847   63744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:21:57.160702   63744 node_ready.go:35] waiting up to 6m0s for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172751   63744 node_ready.go:49] node "embed-certs-503330" has status "Ready":"True"
	I1009 20:21:57.172781   63744 node_ready.go:38] duration metric: took 12.05112ms for node "embed-certs-503330" to be "Ready" ...
	I1009 20:21:57.172794   63744 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:21:57.181089   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:21:57.242001   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:21:57.263153   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:21:57.263173   63744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:21:57.302934   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:21:57.302962   63744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:21:57.335796   63744 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.335822   63744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:21:57.361537   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:21:57.418449   63744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:21:57.903919   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.903945   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904232   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904252   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:57.904261   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:57.904269   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:57.904289   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:57.904560   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:57.904578   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131399   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131433   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131434   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131451   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131717   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131742   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131750   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131761   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131762   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131792   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.131796   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131847   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.131861   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.131869   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.131972   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.131986   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133342   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.133353   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.133363   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.133372   63744 addons.go:475] Verifying addon metrics-server=true in "embed-certs-503330"
	I1009 20:21:58.148066   63744 main.go:141] libmachine: Making call to close driver server
	I1009 20:21:58.148090   63744 main.go:141] libmachine: (embed-certs-503330) Calling .Close
	I1009 20:21:58.148302   63744 main.go:141] libmachine: (embed-certs-503330) DBG | Closing plugin on server side
	I1009 20:21:58.148304   63744 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:21:58.148331   63744 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:21:58.149874   63744 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass
	I1009 20:21:58.151249   63744 addons.go:510] duration metric: took 1.243909023s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass]
	I1009 20:22:00.398702   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:02.898157   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:21:59.187137   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:01.686294   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:03.687302   63744 pod_ready.go:103] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:04.187813   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:04.187838   63744 pod_ready.go:82] duration metric: took 7.006724226s for pod "coredns-7c65d6cfc9-j62fb" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:04.187847   63744 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693964   63744 pod_ready.go:93] pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.693989   63744 pod_ready.go:82] duration metric: took 1.506136012s for pod "coredns-7c65d6cfc9-sttbg" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.693999   63744 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698244   63744 pod_ready.go:93] pod "etcd-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.698263   63744 pod_ready.go:82] duration metric: took 4.258915ms for pod "etcd-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.698272   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702503   63744 pod_ready.go:93] pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.702523   63744 pod_ready.go:82] duration metric: took 4.24469ms for pod "kube-apiserver-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.702534   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706794   63744 pod_ready.go:93] pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.706814   63744 pod_ready.go:82] duration metric: took 4.272023ms for pod "kube-controller-manager-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.706824   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785041   63744 pod_ready.go:93] pod "kube-proxy-k4sqz" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:05.785063   63744 pod_ready.go:82] duration metric: took 78.232276ms for pod "kube-proxy-k4sqz" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:05.785072   63744 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185082   63744 pod_ready.go:93] pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:06.185107   63744 pod_ready.go:82] duration metric: took 400.026614ms for pod "kube-scheduler-embed-certs-503330" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:06.185118   63744 pod_ready.go:39] duration metric: took 9.012311475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:06.185134   63744 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:06.185190   63744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:06.200274   63744 api_server.go:72] duration metric: took 9.292974134s to wait for apiserver process to appear ...
	I1009 20:22:06.200300   63744 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:06.200319   63744 api_server.go:253] Checking apiserver healthz at https://192.168.50.97:8443/healthz ...
	I1009 20:22:06.204606   63744 api_server.go:279] https://192.168.50.97:8443/healthz returned 200:
	ok
	I1009 20:22:06.205489   63744 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:06.205507   63744 api_server.go:131] duration metric: took 5.200899ms to wait for apiserver health ...
	I1009 20:22:06.205515   63744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:06.387526   63744 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:06.387560   63744 system_pods.go:61] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.387566   63744 system_pods.go:61] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.387569   63744 system_pods.go:61] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.387572   63744 system_pods.go:61] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.387576   63744 system_pods.go:61] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.387580   63744 system_pods.go:61] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.387584   63744 system_pods.go:61] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.387589   63744 system_pods.go:61] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.387595   63744 system_pods.go:61] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.387604   63744 system_pods.go:74] duration metric: took 182.083801ms to wait for pod list to return data ...
	I1009 20:22:06.387614   63744 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:06.585261   63744 default_sa.go:45] found service account: "default"
	I1009 20:22:06.585283   63744 default_sa.go:55] duration metric: took 197.662514ms for default service account to be created ...
	I1009 20:22:06.585292   63744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:06.788380   63744 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:06.788405   63744 system_pods.go:89] "coredns-7c65d6cfc9-j62fb" [ecbf7b08-3855-42ca-a144-2cada67e9d09] Running
	I1009 20:22:06.788410   63744 system_pods.go:89] "coredns-7c65d6cfc9-sttbg" [453ffb79-d6d0-4ba4-baf6-cbc00df68cc2] Running
	I1009 20:22:06.788414   63744 system_pods.go:89] "etcd-embed-certs-503330" [9132b8d3-ef82-4b77-a3d9-9209949628f1] Running
	I1009 20:22:06.788418   63744 system_pods.go:89] "kube-apiserver-embed-certs-503330" [88d48cd5-4f8d-48c2-bbe4-35a39b85c641] Running
	I1009 20:22:06.788421   63744 system_pods.go:89] "kube-controller-manager-embed-certs-503330" [eccf20f8-f5f3-4f1c-8ecc-d37b3fe8a005] Running
	I1009 20:22:06.788425   63744 system_pods.go:89] "kube-proxy-k4sqz" [e699a0fc-e2f4-45b5-960b-54c2a4a35b87] Running
	I1009 20:22:06.788428   63744 system_pods.go:89] "kube-scheduler-embed-certs-503330" [574d0f83-ff4b-4a50-9aae-20addd3941db] Running
	I1009 20:22:06.788433   63744 system_pods.go:89] "metrics-server-6867b74b74-79m5x" [c28befcf-7206-4b43-a6ef-6fa017fac7a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:06.788437   63744 system_pods.go:89] "storage-provisioner" [13817757-5de5-44be-9976-cb3bda284db8] Running
	I1009 20:22:06.788445   63744 system_pods.go:126] duration metric: took 203.147541ms to wait for k8s-apps to be running ...
	I1009 20:22:06.788454   63744 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:06.788493   63744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:06.808681   63744 system_svc.go:56] duration metric: took 20.217422ms WaitForService to wait for kubelet
	I1009 20:22:06.808710   63744 kubeadm.go:582] duration metric: took 9.901411942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:06.808733   63744 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:06.984902   63744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:06.984932   63744 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:06.984945   63744 node_conditions.go:105] duration metric: took 176.206313ms to run NodePressure ...
	I1009 20:22:06.984958   63744 start.go:241] waiting for startup goroutines ...
	I1009 20:22:06.984968   63744 start.go:246] waiting for cluster config update ...
	I1009 20:22:06.984981   63744 start.go:255] writing updated cluster config ...
	I1009 20:22:06.985286   63744 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:07.038935   63744 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:07.040555   63744 out.go:177] * Done! kubectl is now configured to use "embed-certs-503330" cluster and "default" namespace by default
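	(With the "embed-certs-503330" context configured as reported above, the following commands are an illustrative way to confirm that state by hand; they are not part of the recorded test run.)

	    # Check node readiness and system pods in the freshly started cluster
	    kubectl --context embed-certs-503330 get nodes
	    kubectl --context embed-certs-503330 -n kube-system get pods
	    # List the addons minikube reports as enabled for this profile
	    minikube -p embed-certs-503330 addons list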
	I1009 20:22:07.095426   64109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.220236459s)
	I1009 20:22:07.095500   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:07.112458   64109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 20:22:07.126942   64109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:22:07.140284   64109 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:22:07.140304   64109 kubeadm.go:157] found existing configuration files:
	
	I1009 20:22:07.140349   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1009 20:22:07.150051   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:22:07.150089   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:22:07.159508   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1009 20:22:07.169670   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:22:07.169724   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:22:07.179378   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.189534   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:22:07.189590   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:22:07.198752   64109 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1009 20:22:07.207878   64109 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:22:07.207922   64109 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:22:07.217131   64109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:22:07.272837   64109 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 20:22:07.272983   64109 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:22:07.390966   64109 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:22:07.391157   64109 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:22:07.391298   64109 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 20:22:07.402064   64109 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:22:07.404170   64109 out.go:235]   - Generating certificates and keys ...
	I1009 20:22:07.404277   64109 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:22:07.404377   64109 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:22:07.404500   64109 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:22:07.404594   64109 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:22:07.404709   64109 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:22:07.404798   64109 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:22:07.404891   64109 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:22:07.404980   64109 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:22:07.405087   64109 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:22:07.405184   64109 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:22:07.405257   64109 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:22:07.405339   64109 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:22:04.898623   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:06.899217   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:07.573252   64109 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:22:07.929073   64109 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 20:22:08.151802   64109 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:22:08.220927   64109 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:22:08.351546   64109 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:22:08.352048   64109 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:22:08.354486   64109 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:22:08.356298   64109 out.go:235]   - Booting up control plane ...
	I1009 20:22:08.356416   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:22:08.356497   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:22:08.356564   64109 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:22:08.376381   64109 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:22:08.383479   64109 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:22:08.383861   64109 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:22:08.515158   64109 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 20:22:08.515282   64109 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 20:22:09.516371   64109 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001248976s
	I1009 20:22:09.516460   64109 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 20:22:09.398667   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:11.898547   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:14.518560   64109 kubeadm.go:310] [api-check] The API server is healthy after 5.002267352s
	I1009 20:22:14.535812   64109 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 20:22:14.551918   64109 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 20:22:14.575035   64109 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 20:22:14.575281   64109 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-733270 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 20:22:14.589604   64109 kubeadm.go:310] [bootstrap-token] Using token: q60nq5.9zsgiaeid5aito18
	I1009 20:22:14.590971   64109 out.go:235]   - Configuring RBAC rules ...
	I1009 20:22:14.591128   64109 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 20:22:14.597327   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 20:22:14.605584   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 20:22:14.608650   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 20:22:14.614771   64109 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 20:22:14.618089   64109 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 20:22:14.929271   64109 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 20:22:15.378546   64109 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 20:22:15.929242   64109 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 20:22:15.930222   64109 kubeadm.go:310] 
	I1009 20:22:15.930305   64109 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 20:22:15.930314   64109 kubeadm.go:310] 
	I1009 20:22:15.930395   64109 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 20:22:15.930423   64109 kubeadm.go:310] 
	I1009 20:22:15.930468   64109 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 20:22:15.930569   64109 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 20:22:15.930635   64109 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 20:22:15.930643   64109 kubeadm.go:310] 
	I1009 20:22:15.930711   64109 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 20:22:15.930718   64109 kubeadm.go:310] 
	I1009 20:22:15.930758   64109 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 20:22:15.930764   64109 kubeadm.go:310] 
	I1009 20:22:15.930807   64109 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 20:22:15.930874   64109 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 20:22:15.930933   64109 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 20:22:15.930939   64109 kubeadm.go:310] 
	I1009 20:22:15.931013   64109 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 20:22:15.931138   64109 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 20:22:15.931150   64109 kubeadm.go:310] 
	I1009 20:22:15.931258   64109 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931411   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 \
	I1009 20:22:15.931450   64109 kubeadm.go:310] 	--control-plane 
	I1009 20:22:15.931460   64109 kubeadm.go:310] 
	I1009 20:22:15.931560   64109 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 20:22:15.931569   64109 kubeadm.go:310] 
	I1009 20:22:15.931668   64109 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token q60nq5.9zsgiaeid5aito18 \
	I1009 20:22:15.931824   64109 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b723cd7b42b34300084de1b8c6a59ec539815581619f4c4935af95f4653aa2e8 
	I1009 20:22:15.933191   64109 kubeadm.go:310] W1009 20:22:07.220393    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933602   64109 kubeadm.go:310] W1009 20:22:07.223065    2513 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 20:22:15.933757   64109 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:22:15.933786   64109 cni.go:84] Creating CNI manager for ""
	I1009 20:22:15.933800   64109 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 20:22:15.935449   64109 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1009 20:22:15.936759   64109 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1009 20:22:15.947648   64109 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1009 20:22:15.966343   64109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 20:22:15.966422   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:15.966483   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-733270 minikube.k8s.io/updated_at=2024_10_09T20_22_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=default-k8s-diff-port-733270 minikube.k8s.io/primary=true
	I1009 20:22:16.186232   64109 ops.go:34] apiserver oom_adj: -16
	I1009 20:22:16.186379   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:16.686824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:17.187316   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:14.398119   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:16.399791   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:17.687381   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.186824   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:18.687500   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.187331   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.687194   64109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 20:22:19.767575   64109 kubeadm.go:1113] duration metric: took 3.801217416s to wait for elevateKubeSystemPrivileges
	I1009 20:22:19.767611   64109 kubeadm.go:394] duration metric: took 5m1.132732036s to StartCluster
	I1009 20:22:19.767631   64109 settings.go:142] acquiring lock: {Name:mk8de1dbe22bf1a48900a8f193b006362ecb6815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.767719   64109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:22:19.769461   64109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/kubeconfig: {Name:mk30029f59f88b81a5c9b836ef9f873e0abbfc96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 20:22:19.769695   64109 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.134 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 20:22:19.769758   64109 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1009 20:22:19.769856   64109 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769884   64109 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-733270"
	I1009 20:22:19.769881   64109 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.769894   64109 config.go:182] Loaded profile config "default-k8s-diff-port-733270": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:22:19.769908   64109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-733270"
	W1009 20:22:19.769897   64109 addons.go:243] addon storage-provisioner should already be in state true
	I1009 20:22:19.769970   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.769892   64109 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-733270"
	I1009 20:22:19.770056   64109 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.770069   64109 addons.go:243] addon metrics-server should already be in state true
	I1009 20:22:19.770116   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.770324   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770356   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770364   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770392   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.770486   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.770522   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.771540   64109 out.go:177] * Verifying Kubernetes components...
	I1009 20:22:19.772979   64109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 20:22:19.785692   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I1009 20:22:19.785792   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41085
	I1009 20:22:19.786095   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786204   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.786608   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786629   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786759   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.786776   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.786948   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.787422   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.787449   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.787843   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.788015   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.788974   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34217
	I1009 20:22:19.789282   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.789751   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.789772   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.791379   64109 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-733270"
	W1009 20:22:19.791400   64109 addons.go:243] addon default-storageclass should already be in state true
	I1009 20:22:19.791428   64109 host.go:66] Checking if "default-k8s-diff-port-733270" exists ...
	I1009 20:22:19.791601   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.791796   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.791834   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.792113   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.792147   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.806661   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
	I1009 20:22:19.807178   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41309
	I1009 20:22:19.807283   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807700   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.807966   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.807989   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808200   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.808223   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.808407   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808586   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.808629   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.808811   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.810504   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810633   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.810671   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1009 20:22:19.811047   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.811579   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.811602   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.811962   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.812375   64109 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 20:22:19.812404   64109 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 20:22:19.812666   64109 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1009 20:22:19.812673   64109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 20:22:19.814145   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 20:22:19.814160   64109 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 20:22:19.814173   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.814293   64109 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:19.814308   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 20:22:19.814324   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.817244   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818718   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.818744   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.818881   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.818956   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819037   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819240   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.819401   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.819677   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.819697   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.819713   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.819831   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.819990   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.820176   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.831920   64109 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35349
	I1009 20:22:19.832278   64109 main.go:141] libmachine: () Calling .GetVersion
	I1009 20:22:19.832725   64109 main.go:141] libmachine: Using API Version  1
	I1009 20:22:19.832757   64109 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 20:22:19.833093   64109 main.go:141] libmachine: () Calling .GetMachineName
	I1009 20:22:19.833271   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetState
	I1009 20:22:19.834841   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .DriverName
	I1009 20:22:19.835042   64109 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:19.835074   64109 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 20:22:19.835094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHHostname
	I1009 20:22:19.837916   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838611   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b6:c5:b9", ip: ""} in network mk-default-k8s-diff-port-733270: {Iface:virbr4 ExpiryTime:2024-10-09 21:17:00 +0000 UTC Type:0 Mac:52:54:00:b6:c5:b9 Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:default-k8s-diff-port-733270 Clientid:01:52:54:00:b6:c5:b9}
	I1009 20:22:19.838651   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | domain default-k8s-diff-port-733270 has defined IP address 192.168.72.134 and MAC address 52:54:00:b6:c5:b9 in network mk-default-k8s-diff-port-733270
	I1009 20:22:19.838759   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHPort
	I1009 20:22:19.838927   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHKeyPath
	I1009 20:22:19.839075   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .GetSSHUsername
	I1009 20:22:19.839216   64109 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/default-k8s-diff-port-733270/id_rsa Username:docker}
	I1009 20:22:19.968622   64109 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 20:22:19.988987   64109 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005886   64109 node_ready.go:49] node "default-k8s-diff-port-733270" has status "Ready":"True"
	I1009 20:22:20.005909   64109 node_ready.go:38] duration metric: took 16.891882ms for node "default-k8s-diff-port-733270" to be "Ready" ...
	I1009 20:22:20.005920   64109 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:20.015076   64109 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:20.072480   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 20:22:20.072517   64109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1009 20:22:20.089167   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 20:22:20.101256   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 20:22:20.128261   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 20:22:20.128310   64109 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 20:22:20.166749   64109 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.166772   64109 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 20:22:20.250822   64109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 20:22:20.802064   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802094   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802142   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802174   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802449   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802462   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802465   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.802471   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802479   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.802482   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.802490   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.802503   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.804339   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804345   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804381   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.804403   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.804413   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.804426   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:20.820127   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:20.820148   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:20.820509   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:20.820526   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:20.820558   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.348946   64109 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.098079149s)
	I1009 20:22:21.349009   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349024   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349347   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349396   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349404   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349420   64109 main.go:141] libmachine: Making call to close driver server
	I1009 20:22:21.349428   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) Calling .Close
	I1009 20:22:21.349689   64109 main.go:141] libmachine: (default-k8s-diff-port-733270) DBG | Closing plugin on server side
	I1009 20:22:21.349748   64109 main.go:141] libmachine: Successfully made call to close driver server
	I1009 20:22:21.349774   64109 main.go:141] libmachine: Making call to close connection to plugin binary
	I1009 20:22:21.349788   64109 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-733270"
	I1009 20:22:21.351765   64109 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1009 20:22:21.352876   64109 addons.go:510] duration metric: took 1.58312679s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1009 20:22:22.021876   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:18.401861   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:20.899295   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:24.521853   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.021730   64109 pod_ready.go:103] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:23.399283   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:25.897649   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:27.897899   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:28.021952   64109 pod_ready.go:93] pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.021974   64109 pod_ready.go:82] duration metric: took 8.006873591s for pod "etcd-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.021983   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026148   64109 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.026167   64109 pod_ready.go:82] duration metric: took 4.178272ms for pod "kube-apiserver-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.026176   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029955   64109 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.029976   64109 pod_ready.go:82] duration metric: took 3.792606ms for pod "kube-controller-manager-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.029986   64109 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033674   64109 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace has status "Ready":"True"
	I1009 20:22:28.033690   64109 pod_ready.go:82] duration metric: took 3.698391ms for pod "kube-scheduler-default-k8s-diff-port-733270" in "kube-system" namespace to be "Ready" ...
	I1009 20:22:28.033697   64109 pod_ready.go:39] duration metric: took 8.027766695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:28.033709   64109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:28.033754   64109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:28.057802   64109 api_server.go:72] duration metric: took 8.288077751s to wait for apiserver process to appear ...
	I1009 20:22:28.057830   64109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:28.057850   64109 api_server.go:253] Checking apiserver healthz at https://192.168.72.134:8444/healthz ...
	I1009 20:22:28.069876   64109 api_server.go:279] https://192.168.72.134:8444/healthz returned 200:
	ok
	I1009 20:22:28.071652   64109 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:28.071676   64109 api_server.go:131] duration metric: took 13.838153ms to wait for apiserver health ...
	I1009 20:22:28.071684   64109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:28.083482   64109 system_pods.go:59] 9 kube-system pods found
	I1009 20:22:28.083504   64109 system_pods.go:61] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.083509   64109 system_pods.go:61] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.083513   64109 system_pods.go:61] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.083516   64109 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.083520   64109 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.083523   64109 system_pods.go:61] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.083526   64109 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.083531   64109 system_pods.go:61] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.083535   64109 system_pods.go:61] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.083542   64109 system_pods.go:74] duration metric: took 11.853134ms to wait for pod list to return data ...
	I1009 20:22:28.083548   64109 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:28.086146   64109 default_sa.go:45] found service account: "default"
	I1009 20:22:28.086165   64109 default_sa.go:55] duration metric: took 2.611433ms for default service account to be created ...
	I1009 20:22:28.086173   64109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:28.223233   64109 system_pods.go:86] 9 kube-system pods found
	I1009 20:22:28.223260   64109 system_pods.go:89] "coredns-7c65d6cfc9-6644x" [f598059d-a036-45df-885c-95efd04424d9] Running
	I1009 20:22:28.223266   64109 system_pods.go:89] "coredns-7c65d6cfc9-8x9ns" [08e5e8e5-f679-486e-b1a5-69eb7b46d49e] Running
	I1009 20:22:28.223270   64109 system_pods.go:89] "etcd-default-k8s-diff-port-733270" [6d71f71b-6c43-42eb-8408-4670fa4a3777] Running
	I1009 20:22:28.223274   64109 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-733270" [c93a86db-93f5-43be-a3c0-7d905ce05b64] Running
	I1009 20:22:28.223278   64109 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-733270" [1c52c255-919d-45be-af7e-ce40f58dac1e] Running
	I1009 20:22:28.223281   64109 system_pods.go:89] "kube-proxy-6klwf" [fb78cea4-6c44-4a04-a75b-6ed061c1ecdf] Running
	I1009 20:22:28.223285   64109 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-733270" [fac8b55c-6288-4cbc-b595-004150a4ee4e] Running
	I1009 20:22:28.223291   64109 system_pods.go:89] "metrics-server-6867b74b74-srjrs" [9fe02f22-4b36-4d68-bdf8-51d66609567a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:28.223295   64109 system_pods.go:89] "storage-provisioner" [90f34170-4cef-4daa-ad01-14999b6f1110] Running
	I1009 20:22:28.223303   64109 system_pods.go:126] duration metric: took 137.124429ms to wait for k8s-apps to be running ...
	I1009 20:22:28.223310   64109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:28.223352   64109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:28.239300   64109 system_svc.go:56] duration metric: took 15.983195ms WaitForService to wait for kubelet
	I1009 20:22:28.239324   64109 kubeadm.go:582] duration metric: took 8.469605426s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:28.239341   64109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:28.419917   64109 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:28.419940   64109 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:28.419951   64109 node_conditions.go:105] duration metric: took 180.606696ms to run NodePressure ...
	I1009 20:22:28.419962   64109 start.go:241] waiting for startup goroutines ...
	I1009 20:22:28.419969   64109 start.go:246] waiting for cluster config update ...
	I1009 20:22:28.419978   64109 start.go:255] writing updated cluster config ...
	I1009 20:22:28.420224   64109 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:28.467253   64109 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:28.469239   64109 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-733270" cluster and "default" namespace by default
	I1009 20:22:29.898528   63427 pod_ready.go:103] pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace has status "Ready":"False"
	I1009 20:22:31.897863   63427 pod_ready.go:82] duration metric: took 4m0.005763954s for pod "metrics-server-6867b74b74-fhcfl" in "kube-system" namespace to be "Ready" ...
	E1009 20:22:31.897884   63427 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1009 20:22:31.897892   63427 pod_ready.go:39] duration metric: took 4m2.806165062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 20:22:31.897906   63427 api_server.go:52] waiting for apiserver process to appear ...
	I1009 20:22:31.897930   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:31.897972   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:31.945643   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:31.945667   63427 cri.go:89] found id: ""
	I1009 20:22:31.945677   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:31.945720   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.949923   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:31.950018   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:31.989365   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:31.989391   63427 cri.go:89] found id: ""
	I1009 20:22:31.989401   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:31.989451   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:31.993865   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:31.993926   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:32.030658   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.030678   63427 cri.go:89] found id: ""
	I1009 20:22:32.030685   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:32.030731   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.034587   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:32.034647   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:32.078482   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.078508   63427 cri.go:89] found id: ""
	I1009 20:22:32.078516   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:32.078570   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.082565   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:32.082626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:32.118355   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.118379   63427 cri.go:89] found id: ""
	I1009 20:22:32.118388   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:32.118444   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.123110   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:32.123170   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:32.163052   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.163077   63427 cri.go:89] found id: ""
	I1009 20:22:32.163085   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:32.163137   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.167085   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:32.167146   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:32.201126   63427 cri.go:89] found id: ""
	I1009 20:22:32.201149   63427 logs.go:282] 0 containers: []
	W1009 20:22:32.201156   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:32.201161   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:32.201217   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:32.242235   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.242259   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.242265   63427 cri.go:89] found id: ""
	I1009 20:22:32.242274   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:32.242337   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.247127   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:32.250692   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:32.250712   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:32.301343   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:32.301368   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:32.347256   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:32.347283   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:32.485223   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:32.485263   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:32.530013   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:32.530054   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:32.580422   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:32.580447   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:32.625202   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:32.625237   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:32.664203   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:32.664230   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:32.701753   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:32.701782   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:32.741584   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:32.741610   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:32.779976   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:32.780003   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:32.848844   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:32.848875   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:32.871387   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:32.871416   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:35.836255   63427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 20:22:35.853510   63427 api_server.go:72] duration metric: took 4m14.501873287s to wait for apiserver process to appear ...
	I1009 20:22:35.853541   63427 api_server.go:88] waiting for apiserver healthz status ...
	I1009 20:22:35.853583   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:35.853626   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:35.889199   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:35.889228   63427 cri.go:89] found id: ""
	I1009 20:22:35.889237   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:35.889299   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.893644   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:35.893706   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:35.934151   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:35.934178   63427 cri.go:89] found id: ""
	I1009 20:22:35.934188   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:35.934244   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.938561   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:35.938618   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:35.974555   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:35.974579   63427 cri.go:89] found id: ""
	I1009 20:22:35.974588   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:35.974639   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:35.978468   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:35.978514   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:36.014292   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.014316   63427 cri.go:89] found id: ""
	I1009 20:22:36.014324   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:36.014366   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.018618   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:36.018672   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:36.059334   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.059366   63427 cri.go:89] found id: ""
	I1009 20:22:36.059377   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:36.059436   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.063552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:36.063612   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:36.098384   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.098404   63427 cri.go:89] found id: ""
	I1009 20:22:36.098413   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:36.098464   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.102428   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:36.102490   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:36.140422   63427 cri.go:89] found id: ""
	I1009 20:22:36.140451   63427 logs.go:282] 0 containers: []
	W1009 20:22:36.140461   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:36.140467   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:36.140524   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:36.178576   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.178600   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.178604   63427 cri.go:89] found id: ""
	I1009 20:22:36.178610   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:36.178662   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.183208   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:36.186971   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:36.186994   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:36.222365   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:36.222389   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:36.652499   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:36.652533   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:36.700493   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:36.700523   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:36.715630   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:36.715657   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:36.757738   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:36.757766   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:36.793469   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:36.793491   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:36.833374   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:36.833400   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:36.894545   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:36.894579   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:36.932407   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:36.932441   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:36.969165   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:36.969198   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:37.039100   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:37.039138   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:37.141855   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:37.141889   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.701118   63427 api_server.go:253] Checking apiserver healthz at https://192.168.39.162:8443/healthz ...
	I1009 20:22:39.705369   63427 api_server.go:279] https://192.168.39.162:8443/healthz returned 200:
	ok
	I1009 20:22:39.706731   63427 api_server.go:141] control plane version: v1.31.1
	I1009 20:22:39.706750   63427 api_server.go:131] duration metric: took 3.853202912s to wait for apiserver health ...
	I1009 20:22:39.706757   63427 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 20:22:39.706777   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:22:39.706821   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:22:39.745203   63427 cri.go:89] found id: "42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:39.745227   63427 cri.go:89] found id: ""
	I1009 20:22:39.745234   63427 logs.go:282] 1 containers: [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4]
	I1009 20:22:39.745277   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.749708   63427 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:22:39.749768   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:22:39.786606   63427 cri.go:89] found id: "9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:39.786629   63427 cri.go:89] found id: ""
	I1009 20:22:39.786637   63427 logs.go:282] 1 containers: [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da]
	I1009 20:22:39.786681   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.790981   63427 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:22:39.791036   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:22:39.826615   63427 cri.go:89] found id: "3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:39.826635   63427 cri.go:89] found id: ""
	I1009 20:22:39.826642   63427 logs.go:282] 1 containers: [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d]
	I1009 20:22:39.826710   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.831189   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:22:39.831260   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:22:39.867300   63427 cri.go:89] found id: "c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:39.867320   63427 cri.go:89] found id: ""
	I1009 20:22:39.867327   63427 logs.go:282] 1 containers: [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2]
	I1009 20:22:39.867373   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.871552   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:22:39.871606   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:22:39.905493   63427 cri.go:89] found id: "355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:39.905513   63427 cri.go:89] found id: ""
	I1009 20:22:39.905521   63427 logs.go:282] 1 containers: [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915]
	I1009 20:22:39.905565   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.910653   63427 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:22:39.910704   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:22:39.952830   63427 cri.go:89] found id: "71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:39.952848   63427 cri.go:89] found id: ""
	I1009 20:22:39.952856   63427 logs.go:282] 1 containers: [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783]
	I1009 20:22:39.952901   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:39.957366   63427 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:22:39.957434   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:22:39.993913   63427 cri.go:89] found id: ""
	I1009 20:22:39.993936   63427 logs.go:282] 0 containers: []
	W1009 20:22:39.993943   63427 logs.go:284] No container was found matching "kindnet"
	I1009 20:22:39.993949   63427 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1009 20:22:39.993993   63427 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1009 20:22:40.036654   63427 cri.go:89] found id: "a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.036680   63427 cri.go:89] found id: "8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.036685   63427 cri.go:89] found id: ""
	I1009 20:22:40.036694   63427 logs.go:282] 2 containers: [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c]
	I1009 20:22:40.036752   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.041168   63427 ssh_runner.go:195] Run: which crictl
	I1009 20:22:40.045050   63427 logs.go:123] Gathering logs for dmesg ...
	I1009 20:22:40.045073   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:22:40.059862   63427 logs.go:123] Gathering logs for etcd [9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da] ...
	I1009 20:22:40.059890   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c72eddc313723a9b7a2960084d9aa04176254d999e0d4b14f5e98f0cc6ca5da"
	I1009 20:22:40.098698   63427 logs.go:123] Gathering logs for kube-scheduler [c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2] ...
	I1009 20:22:40.098725   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6154b0051dbcc61bc7b235f3051819718376d17362eb4b40331c24e3e5990b2"
	I1009 20:22:40.136003   63427 logs.go:123] Gathering logs for kube-controller-manager [71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783] ...
	I1009 20:22:40.136028   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 71cf38b8d409674cc989cc817b3a908c25d9a0039055e4103984d47f01750783"
	I1009 20:22:40.192473   63427 logs.go:123] Gathering logs for storage-provisioner [a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544] ...
	I1009 20:22:40.192499   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a672e8a67e92b8bdd57381661e5d866c7c71126b5b4eb6ee66c47c4c80bcb544"
	I1009 20:22:40.228548   63427 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:22:40.228575   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:22:40.634922   63427 logs.go:123] Gathering logs for kubelet ...
	I1009 20:22:40.634956   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:22:40.701278   63427 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:22:40.701313   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 20:22:40.813881   63427 logs.go:123] Gathering logs for kube-apiserver [42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4] ...
	I1009 20:22:40.813915   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42cddfd08cd98b5b7c3a4126a1e8770d82b83d4b042e1a1c491b33abc52c32e4"
	I1009 20:22:40.874590   63427 logs.go:123] Gathering logs for coredns [3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d] ...
	I1009 20:22:40.874619   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f0da5a79567ca6346eca0c6fb0114efc26c074d90ff33f3109aa4be24933c5d"
	I1009 20:22:40.916558   63427 logs.go:123] Gathering logs for kube-proxy [355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915] ...
	I1009 20:22:40.916585   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355de783599f29358754f18d308a5eedcc55643199063be406e4bf768e684915"
	I1009 20:22:40.959294   63427 logs.go:123] Gathering logs for storage-provisioner [8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c] ...
	I1009 20:22:40.959323   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8a3298f9f8701aad7ebb570bc94dc6471c7695d6686f7bb5d5a277abdad3d29c"
	I1009 20:22:40.997037   63427 logs.go:123] Gathering logs for container status ...
	I1009 20:22:40.997065   63427 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 20:22:43.555901   63427 system_pods.go:59] 8 kube-system pods found
	I1009 20:22:43.555933   63427 system_pods.go:61] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.555941   63427 system_pods.go:61] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.555947   63427 system_pods.go:61] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.555953   63427 system_pods.go:61] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.555957   63427 system_pods.go:61] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.555962   63427 system_pods.go:61] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.555973   63427 system_pods.go:61] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.555982   63427 system_pods.go:61] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.555992   63427 system_pods.go:74] duration metric: took 3.849229039s to wait for pod list to return data ...
	I1009 20:22:43.556003   63427 default_sa.go:34] waiting for default service account to be created ...
	I1009 20:22:43.558563   63427 default_sa.go:45] found service account: "default"
	I1009 20:22:43.558582   63427 default_sa.go:55] duration metric: took 2.571282ms for default service account to be created ...
	I1009 20:22:43.558590   63427 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 20:22:43.563017   63427 system_pods.go:86] 8 kube-system pods found
	I1009 20:22:43.563036   63427 system_pods.go:89] "coredns-7c65d6cfc9-dddm2" [284ba3c4-0972-40f4-97d9-6ed9ce09feac] Running
	I1009 20:22:43.563041   63427 system_pods.go:89] "etcd-no-preload-480205" [f1a9a112-b94a-422b-b77d-3b148501d6af] Running
	I1009 20:22:43.563045   63427 system_pods.go:89] "kube-apiserver-no-preload-480205" [98ad3db7-52fe-411e-adfc-efc4f066d216] Running
	I1009 20:22:43.563049   63427 system_pods.go:89] "kube-controller-manager-no-preload-480205" [8b1fea13-c516-4e88-8ac5-8308d97a5ffc] Running
	I1009 20:22:43.563052   63427 system_pods.go:89] "kube-proxy-vbpbk" [acf61f4e-0d31-4712-9d3e-7baa113b31d9] Running
	I1009 20:22:43.563056   63427 system_pods.go:89] "kube-scheduler-no-preload-480205" [8fba4d64-6e86-4f18-9d15-971e862e57fd] Running
	I1009 20:22:43.563074   63427 system_pods.go:89] "metrics-server-6867b74b74-fhcfl" [5c70178a-2be8-4006-b78b-5c4d45091004] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1009 20:22:43.563082   63427 system_pods.go:89] "storage-provisioner" [d88d60b3-7360-4111-b680-e9e2a38e8775] Running
	I1009 20:22:43.563091   63427 system_pods.go:126] duration metric: took 4.493122ms to wait for k8s-apps to be running ...
	I1009 20:22:43.563101   63427 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 20:22:43.563148   63427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:22:43.579410   63427 system_svc.go:56] duration metric: took 16.301009ms WaitForService to wait for kubelet
	I1009 20:22:43.579435   63427 kubeadm.go:582] duration metric: took 4m22.227803615s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 20:22:43.579456   63427 node_conditions.go:102] verifying NodePressure condition ...
	I1009 20:22:43.582061   63427 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1009 20:22:43.582083   63427 node_conditions.go:123] node cpu capacity is 2
	I1009 20:22:43.582095   63427 node_conditions.go:105] duration metric: took 2.633714ms to run NodePressure ...
	I1009 20:22:43.582108   63427 start.go:241] waiting for startup goroutines ...
	I1009 20:22:43.582118   63427 start.go:246] waiting for cluster config update ...
	I1009 20:22:43.582137   63427 start.go:255] writing updated cluster config ...
	I1009 20:22:43.582415   63427 ssh_runner.go:195] Run: rm -f paused
	I1009 20:22:43.628249   63427 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 20:22:43.630230   63427 out.go:177] * Done! kubectl is now configured to use "no-preload-480205" cluster and "default" namespace by default
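	At this point the "no-preload-480205" start has completed, with only metrics-server still Pending. A quick way to confirm that state from the host would be a kubectl check against that context. This is a hedged sketch only: the context name is assumed to follow minikube's usual profile-name convention, and this exact check is not part of the logged test run.
	    # List kube-system pods in the freshly started cluster (assumes the
	    # kubeconfig context is named after the minikube profile).
	    kubectl --context no-preload-480205 get pods -n kube-system
	    # metrics-server-6867b74b74-fhcfl is expected to still show Pending,
	    # matching the system_pods output logged above.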
	I1009 20:23:45.402502   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:23:45.402618   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:23:45.404210   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:45.404308   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:45.404415   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:45.404554   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:45.404699   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:45.404776   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:45.406561   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:45.406656   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:45.406713   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:45.406832   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:45.406929   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:45.407025   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:45.407132   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:45.407247   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:45.407350   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:45.407466   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:45.407586   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:45.407659   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:45.407756   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:45.407850   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:45.407937   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:45.408016   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:45.408074   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:45.408202   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:45.408335   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:45.408407   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:45.408510   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:45.410040   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:45.410141   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:45.410231   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:45.410330   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:45.410409   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:45.410546   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:23:45.410589   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:23:45.410653   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.410810   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.410872   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411059   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411164   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411367   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411428   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411606   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411674   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:23:45.411825   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:23:45.411832   64287 kubeadm.go:310] 
	I1009 20:23:45.411865   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:23:45.411909   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:23:45.411928   64287 kubeadm.go:310] 
	I1009 20:23:45.411974   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:23:45.412018   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:23:45.412138   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:23:45.412155   64287 kubeadm.go:310] 
	I1009 20:23:45.412300   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:23:45.412344   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:23:45.412393   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:23:45.412400   64287 kubeadm.go:310] 
	I1009 20:23:45.412516   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:23:45.412618   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:23:45.412631   64287 kubeadm.go:310] 
	I1009 20:23:45.412764   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:23:45.412885   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:23:45.412996   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:23:45.413059   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:23:45.413078   64287 kubeadm.go:310] 
	W1009 20:23:45.413176   64287 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1009 20:23:45.413219   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1009 20:23:45.881931   64287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 20:23:45.897391   64287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 20:23:45.907598   64287 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 20:23:45.907621   64287 kubeadm.go:157] found existing configuration files:
	
	I1009 20:23:45.907668   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 20:23:45.917540   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 20:23:45.917585   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 20:23:45.927278   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 20:23:45.937054   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 20:23:45.937109   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 20:23:45.946544   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.956863   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 20:23:45.956901   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 20:23:45.966184   64287 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 20:23:45.975335   64287 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 20:23:45.975385   64287 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 20:23:45.984552   64287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1009 20:23:46.063271   64287 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1009 20:23:46.063380   64287 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 20:23:46.213340   64287 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 20:23:46.213511   64287 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 20:23:46.213652   64287 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 20:23:46.388334   64287 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 20:23:46.390196   64287 out.go:235]   - Generating certificates and keys ...
	I1009 20:23:46.390303   64287 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 20:23:46.390384   64287 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 20:23:46.390499   64287 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1009 20:23:46.390606   64287 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1009 20:23:46.390710   64287 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1009 20:23:46.390799   64287 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1009 20:23:46.390899   64287 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1009 20:23:46.390975   64287 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1009 20:23:46.391097   64287 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1009 20:23:46.391196   64287 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1009 20:23:46.391268   64287 kubeadm.go:310] [certs] Using the existing "sa" key
	I1009 20:23:46.391355   64287 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 20:23:46.513116   64287 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 20:23:46.906952   64287 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 20:23:47.053715   64287 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 20:23:47.184809   64287 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 20:23:47.207139   64287 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 20:23:47.208338   64287 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 20:23:47.208424   64287 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 20:23:47.362764   64287 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 20:23:47.364703   64287 out.go:235]   - Booting up control plane ...
	I1009 20:23:47.364823   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 20:23:47.377925   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 20:23:47.379842   64287 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 20:23:47.380533   64287 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 20:23:47.382819   64287 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 20:24:27.385438   64287 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1009 20:24:27.385546   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:27.385726   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:32.386071   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:32.386268   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:24:42.386802   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:24:42.386979   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:02.388082   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:02.388300   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.388787   64287 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1009 20:25:42.389021   64287 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1009 20:25:42.389080   64287 kubeadm.go:310] 
	I1009 20:25:42.389329   64287 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1009 20:25:42.389524   64287 kubeadm.go:310] 		timed out waiting for the condition
	I1009 20:25:42.389545   64287 kubeadm.go:310] 
	I1009 20:25:42.389625   64287 kubeadm.go:310] 	This error is likely caused by:
	I1009 20:25:42.389680   64287 kubeadm.go:310] 		- The kubelet is not running
	I1009 20:25:42.389832   64287 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1009 20:25:42.389846   64287 kubeadm.go:310] 
	I1009 20:25:42.389963   64287 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1009 20:25:42.390019   64287 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1009 20:25:42.390066   64287 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1009 20:25:42.390081   64287 kubeadm.go:310] 
	I1009 20:25:42.390201   64287 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1009 20:25:42.390312   64287 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1009 20:25:42.390321   64287 kubeadm.go:310] 
	I1009 20:25:42.390438   64287 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1009 20:25:42.390550   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1009 20:25:42.390671   64287 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1009 20:25:42.390779   64287 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1009 20:25:42.390791   64287 kubeadm.go:310] 
	I1009 20:25:42.391382   64287 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 20:25:42.391507   64287 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1009 20:25:42.391606   64287 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1009 20:25:42.391673   64287 kubeadm.go:394] duration metric: took 7m57.392748571s to StartCluster
	I1009 20:25:42.391719   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 20:25:42.391785   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 20:25:42.439581   64287 cri.go:89] found id: ""
	I1009 20:25:42.439610   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.439621   64287 logs.go:284] No container was found matching "kube-apiserver"
	I1009 20:25:42.439628   64287 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 20:25:42.439695   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 20:25:42.476205   64287 cri.go:89] found id: ""
	I1009 20:25:42.476231   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.476238   64287 logs.go:284] No container was found matching "etcd"
	I1009 20:25:42.476243   64287 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 20:25:42.476297   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 20:25:42.528317   64287 cri.go:89] found id: ""
	I1009 20:25:42.528342   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.528350   64287 logs.go:284] No container was found matching "coredns"
	I1009 20:25:42.528356   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 20:25:42.528413   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 20:25:42.564857   64287 cri.go:89] found id: ""
	I1009 20:25:42.564885   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.564893   64287 logs.go:284] No container was found matching "kube-scheduler"
	I1009 20:25:42.564899   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 20:25:42.564956   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 20:25:42.600053   64287 cri.go:89] found id: ""
	I1009 20:25:42.600081   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.600088   64287 logs.go:284] No container was found matching "kube-proxy"
	I1009 20:25:42.600094   64287 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 20:25:42.600146   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 20:25:42.636997   64287 cri.go:89] found id: ""
	I1009 20:25:42.637026   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.637034   64287 logs.go:284] No container was found matching "kube-controller-manager"
	I1009 20:25:42.637047   64287 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 20:25:42.637107   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 20:25:42.672228   64287 cri.go:89] found id: ""
	I1009 20:25:42.672255   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.672266   64287 logs.go:284] No container was found matching "kindnet"
	I1009 20:25:42.672273   64287 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1009 20:25:42.672331   64287 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1009 20:25:42.711696   64287 cri.go:89] found id: ""
	I1009 20:25:42.711727   64287 logs.go:282] 0 containers: []
	W1009 20:25:42.711737   64287 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1009 20:25:42.711749   64287 logs.go:123] Gathering logs for kubelet ...
	I1009 20:25:42.711764   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 20:25:42.764839   64287 logs.go:123] Gathering logs for dmesg ...
	I1009 20:25:42.764876   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 20:25:42.778484   64287 logs.go:123] Gathering logs for describe nodes ...
	I1009 20:25:42.778512   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1009 20:25:42.864830   64287 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1009 20:25:42.864859   64287 logs.go:123] Gathering logs for CRI-O ...
	I1009 20:25:42.864874   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 20:25:42.975355   64287 logs.go:123] Gathering logs for container status ...
	I1009 20:25:42.975389   64287 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1009 20:25:43.015247   64287 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1009 20:25:43.015307   64287 out.go:270] * 
	W1009 20:25:43.015375   64287 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.015392   64287 out.go:270] * 
	W1009 20:25:43.016664   64287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 20:25:43.020135   64287 out.go:201] 
	W1009 20:25:43.021388   64287 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1009 20:25:43.021427   64287 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1009 20:25:43.021453   64287 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1009 20:25:43.022804   64287 out.go:201] 
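	The suggestion above points at a kubelet cgroup-driver mismatch on the v1.20.0 profile. If one were retrying this old-k8s-version profile by hand, the advised extra-config flag would be passed on the start command line, roughly as sketched below. This is a hedged example: the flags shown are standard minikube options plus the one quoted in the suggestion, and the exact flag set used by the test harness is not visible in this excerpt.
	    # Retry the v1.20.0 profile with the kubelet cgroup driver pinned to systemd,
	    # as suggested by the K8S_KUBELET_NOT_RUNNING advice above.
	    minikube start -p old-k8s-version-169021 \
	      --kubernetes-version=v1.20.0 \
	      --container-runtime=crio \
	      --extra-config=kubelet.cgroup-driver=systemd
	    # Then check whether the kubelet actually stays up this time:
	    minikube ssh -p old-k8s-version-169021 "sudo journalctl -xeu kubelet | tail -n 50"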
	
	
	==> CRI-O <==
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.702585199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506216702552464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fe9c722-5c5f-41e9-91db-9498ed6c3bcc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.703101185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8376bcf-1bc6-4f23-b2e2-178be15e4d1f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.703156017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8376bcf-1bc6-4f23-b2e2-178be15e4d1f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.703185689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a8376bcf-1bc6-4f23-b2e2-178be15e4d1f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.734361129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8951674e-fa09-4181-ad5b-dc1e66586de0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.734431865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8951674e-fa09-4181-ad5b-dc1e66586de0 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.735661986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7122d8d4-daca-46f8-8a22-ed5a0424f6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.736059217Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506216736036612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7122d8d4-daca-46f8-8a22-ed5a0424f6c3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.736525626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb7cae74-70d1-46bc-a9e7-0ee2e0daa2af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.736578980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb7cae74-70d1-46bc-a9e7-0ee2e0daa2af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.736613408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bb7cae74-70d1-46bc-a9e7-0ee2e0daa2af name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.767369328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f45a80ba-ff83-4f63-9201-0f300864e5e2 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.767477851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f45a80ba-ff83-4f63-9201-0f300864e5e2 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.768424992Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a49dc46b-4993-49b3-9445-8b3438e68c8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.768937607Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506216768906200,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a49dc46b-4993-49b3-9445-8b3438e68c8f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.769488659Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf2ee982-e430-49c4-921e-e68b0b26730c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.769557648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf2ee982-e430-49c4-921e-e68b0b26730c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.769600284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf2ee982-e430-49c4-921e-e68b0b26730c name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.800282145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=114b8497-dd53-4ce5-b48d-cc2503154d75 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.800366826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=114b8497-dd53-4ce5-b48d-cc2503154d75 name=/runtime.v1.RuntimeService/Version
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.801663919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d3aa40b-cba5-47f1-aad3-2042afe7e5c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.802128684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728506216802097436,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d3aa40b-cba5-47f1-aad3-2042afe7e5c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.802871816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e2130e1-a9b0-4675-a52b-d5b04380148f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.802954621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e2130e1-a9b0-4675-a52b-d5b04380148f name=/runtime.v1.RuntimeService/ListContainers
	Oct 09 20:36:56 old-k8s-version-169021 crio[636]: time="2024-10-09 20:36:56.802991167Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9e2130e1-a9b0-4675-a52b-d5b04380148f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 9 20:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051476] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041758] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.042560] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.485695] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.635002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +11.304560] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.057777] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.071040] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.192125] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.124687] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.295888] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +6.664222] systemd-fstab-generator[887]: Ignoring "noauto" option for root device
	[  +0.065570] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.848518] systemd-fstab-generator[1010]: Ignoring "noauto" option for root device
	[  +8.732358] kauditd_printk_skb: 46 callbacks suppressed
	[Oct 9 20:21] systemd-fstab-generator[5090]: Ignoring "noauto" option for root device
	[Oct 9 20:23] systemd-fstab-generator[5366]: Ignoring "noauto" option for root device
	[  +0.064209] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 20:36:56 up 19 min,  0 users,  load average: 0.16, 0.04, 0.01
	Linux old-k8s-version-169021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009c0c0, 0xc000b83050)
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: goroutine 169 [select]:
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cebef0, 0x4f0ac20, 0xc000050ff0, 0x1, 0xc00009c0c0)
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000c7e0e0, 0xc00009c0c0)
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000866700, 0xc000b9ae80)
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 09 20:36:55 old-k8s-version-169021 kubelet[6836]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 09 20:36:55 old-k8s-version-169021 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 09 20:36:55 old-k8s-version-169021 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 09 20:36:56 old-k8s-version-169021 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 137.
	Oct 09 20:36:56 old-k8s-version-169021 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 09 20:36:56 old-k8s-version-169021 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 09 20:36:56 old-k8s-version-169021 kubelet[6863]: I1009 20:36:56.242855    6863 server.go:416] Version: v1.20.0
	Oct 09 20:36:56 old-k8s-version-169021 kubelet[6863]: I1009 20:36:56.243356    6863 server.go:837] Client rotation is on, will bootstrap in background
	Oct 09 20:36:56 old-k8s-version-169021 kubelet[6863]: I1009 20:36:56.245866    6863 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 09 20:36:56 old-k8s-version-169021 kubelet[6863]: W1009 20:36:56.247580    6863 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 09 20:36:56 old-k8s-version-169021 kubelet[6863]: I1009 20:36:56.247924    6863 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 2 (221.38524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-169021" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (128.68s)

                                                
                                    

Test pass (242/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 45.4
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 20.92
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 78.92
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 134.41
31 TestAddons/serial/GCPAuth/Namespaces 0.14
34 TestAddons/parallel/Registry 18.58
36 TestAddons/parallel/InspektorGadget 10.84
39 TestAddons/parallel/CSI 67.78
40 TestAddons/parallel/Headlamp 22.8
41 TestAddons/parallel/CloudSpanner 5.69
42 TestAddons/parallel/LocalPath 59.23
43 TestAddons/parallel/NvidiaDevicePlugin 5.96
44 TestAddons/parallel/Yakd 11.83
46 TestCertOptions 67.12
47 TestCertExpiration 301.56
49 TestForceSystemdFlag 45.31
50 TestForceSystemdEnv 69.1
52 TestKVMDriverInstallOrUpdate 4.63
56 TestErrorSpam/setup 39.36
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.81
61 TestErrorSpam/stop 4.97
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.29
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.18
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.67
73 TestFunctional/serial/CacheCmd/cache/add_local 2.23
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.69
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 32.53
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.39
84 TestFunctional/serial/LogsFileCmd 1.35
85 TestFunctional/serial/InvalidService 4.61
87 TestFunctional/parallel/ConfigCmd 0.35
88 TestFunctional/parallel/DashboardCmd 15.31
89 TestFunctional/parallel/DryRun 0.33
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 1.12
95 TestFunctional/parallel/ServiceCmdConnect 11.61
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 49.35
99 TestFunctional/parallel/SSHCmd 0.45
100 TestFunctional/parallel/CpCmd 1.45
101 TestFunctional/parallel/MySQL 31.97
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.52
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
111 TestFunctional/parallel/License 0.64
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.19
122 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
123 TestFunctional/parallel/ProfileCmd/profile_list 0.34
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
125 TestFunctional/parallel/MountCmd/any-port 8.55
126 TestFunctional/parallel/ServiceCmd/List 0.29
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
129 TestFunctional/parallel/MountCmd/specific-port 1.75
130 TestFunctional/parallel/ServiceCmd/Format 0.41
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/Version/short 0.04
133 TestFunctional/parallel/Version/components 0.47
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
138 TestFunctional/parallel/ImageCommands/ImageBuild 4.49
139 TestFunctional/parallel/ImageCommands/Setup 2.11
140 TestFunctional/parallel/MountCmd/VerifyCleanup 1.44
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.03
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.95
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.77
148 TestFunctional/parallel/ImageCommands/ImageRemove 2.74
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.58
151 TestFunctional/delete_echo-server_images 0.03
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 199.03
158 TestMultiControlPlane/serial/DeployApp 7.23
159 TestMultiControlPlane/serial/PingHostFromPods 1.15
160 TestMultiControlPlane/serial/AddWorkerNode 56.36
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
163 TestMultiControlPlane/serial/CopyFile 12.6
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.65
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.59
172 TestMultiControlPlane/serial/RestartCluster 357.41
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
174 TestMultiControlPlane/serial/AddSecondaryNode 80.73
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
179 TestJSONOutput/start/Command 80.13
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.73
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.63
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.32
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 87.66
211 TestMountStart/serial/StartWithMountFirst 28.5
212 TestMountStart/serial/VerifyMountFirst 0.39
213 TestMountStart/serial/StartWithMountSecond 25.15
214 TestMountStart/serial/VerifyMountSecond 0.37
215 TestMountStart/serial/DeleteFirst 0.7
216 TestMountStart/serial/VerifyMountPostDelete 0.37
217 TestMountStart/serial/Stop 1.28
218 TestMountStart/serial/RestartStopped 25.02
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 113.38
223 TestMultiNode/serial/DeployApp2Nodes 7.06
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 49.58
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.57
228 TestMultiNode/serial/CopyFile 7.11
229 TestMultiNode/serial/StopNode 2.37
230 TestMultiNode/serial/StartAfterStop 40.46
232 TestMultiNode/serial/DeleteNode 2
234 TestMultiNode/serial/RestartMultiNode 182.5
235 TestMultiNode/serial/ValidateNameConflict 44.07
242 TestScheduledStopUnix 115.54
246 TestRunningBinaryUpgrade 209.64
250 TestStoppedBinaryUpgrade/Setup 2.8
251 TestStoppedBinaryUpgrade/Upgrade 165.51
260 TestPause/serial/Start 102.63
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
270 TestNetworkPlugins/group/false 3
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
276 TestNoKubernetes/serial/StartWithK8s 53.43
277 TestNoKubernetes/serial/StartWithStopK8s 41.51
278 TestNoKubernetes/serial/Start 28.3
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
282 TestNoKubernetes/serial/ProfileList 11.87
283 TestNoKubernetes/serial/Stop 1.3
284 TestNoKubernetes/serial/StartNoArgs 32.63
286 TestStartStop/group/no-preload/serial/FirstStart 97.65
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
289 TestStartStop/group/embed-certs/serial/FirstStart 111.83
290 TestStartStop/group/no-preload/serial/DeployApp 10.28
291 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.94
295 TestStartStop/group/embed-certs/serial/DeployApp 11.44
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.26
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
304 TestStartStop/group/no-preload/serial/SecondStart 650.89
306 TestStartStop/group/embed-certs/serial/SecondStart 569
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 531.28
309 TestStartStop/group/old-k8s-version/serial/Stop 3.38
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
321 TestStartStop/group/newest-cni/serial/FirstStart 47.55
322 TestNetworkPlugins/group/auto/Start 57.06
323 TestStartStop/group/newest-cni/serial/DeployApp 0
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.2
325 TestStartStop/group/newest-cni/serial/Stop 10.59
326 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
327 TestStartStop/group/newest-cni/serial/SecondStart 41.18
328 TestNetworkPlugins/group/kindnet/Start 82.3
329 TestNetworkPlugins/group/auto/KubeletFlags 0.24
330 TestNetworkPlugins/group/auto/NetCatPod 11.33
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
334 TestStartStop/group/newest-cni/serial/Pause 4.42
335 TestNetworkPlugins/group/auto/DNS 16.49
336 TestNetworkPlugins/group/calico/Start 90.22
337 TestNetworkPlugins/group/auto/Localhost 0.14
338 TestNetworkPlugins/group/auto/HairPin 0.13
339 TestNetworkPlugins/group/custom-flannel/Start 76.76
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/enable-default-cni/Start 72.39
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
343 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
344 TestNetworkPlugins/group/kindnet/DNS 0.22
345 TestNetworkPlugins/group/kindnet/Localhost 0.22
346 TestNetworkPlugins/group/kindnet/HairPin 0.15
347 TestNetworkPlugins/group/flannel/Start 85.93
348 TestNetworkPlugins/group/calico/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.29
350 TestNetworkPlugins/group/calico/NetCatPod 12.31
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
353 TestNetworkPlugins/group/calico/DNS 0.22
354 TestNetworkPlugins/group/calico/Localhost 0.18
355 TestNetworkPlugins/group/calico/HairPin 0.14
356 TestNetworkPlugins/group/custom-flannel/DNS 0.18
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
359 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
360 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.42
361 TestNetworkPlugins/group/bridge/Start 86.11
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
367 TestNetworkPlugins/group/flannel/NetCatPod 10.23
368 TestNetworkPlugins/group/flannel/DNS 0.15
369 TestNetworkPlugins/group/flannel/Localhost 0.13
370 TestNetworkPlugins/group/flannel/HairPin 0.13
371 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
372 TestNetworkPlugins/group/bridge/NetCatPod 11.24
373 TestNetworkPlugins/group/bridge/DNS 0.15
374 TestNetworkPlugins/group/bridge/Localhost 0.11
375 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (45.4s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-944932 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-944932 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.400177573s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (45.40s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1009 18:47:14.456512   16607 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1009 18:47:14.456613   16607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-944932
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-944932: exit status 85 (60.53013ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-944932 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |          |
	|         | -p download-only-944932        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:29.104550   16618 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:29.104656   16618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:29.104666   16618 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:29.104671   16618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:29.104852   16618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	W1009 18:46:29.105028   16618 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19780-9412/.minikube/config/config.json: open /home/jenkins/minikube-integration/19780-9412/.minikube/config/config.json: no such file or directory
	I1009 18:46:29.105645   16618 out.go:352] Setting JSON to true
	I1009 18:46:29.106510   16618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1730,"bootTime":1728497859,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:46:29.106617   16618 start.go:139] virtualization: kvm guest
	I1009 18:46:29.108938   16618 out.go:97] [download-only-944932] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1009 18:46:29.109035   16618 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:46:29.109071   16618 notify.go:220] Checking for updates...
	I1009 18:46:29.110385   16618 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:29.111575   16618 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:29.112638   16618 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:46:29.113753   16618 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:46:29.114775   16618 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:46:29.116796   16618 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:29.116998   16618 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:29.215887   16618 out.go:97] Using the kvm2 driver based on user configuration
	I1009 18:46:29.215921   16618 start.go:297] selected driver: kvm2
	I1009 18:46:29.215929   16618 start.go:901] validating driver "kvm2" against <nil>
	I1009 18:46:29.216281   16618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:29.216398   16618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:46:29.230608   16618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 18:46:29.230648   16618 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:29.231215   16618 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1009 18:46:29.231369   16618 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:29.231397   16618 cni.go:84] Creating CNI manager for ""
	I1009 18:46:29.231441   16618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:46:29.231450   16618 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:29.231496   16618 start.go:340] cluster config:
	{Name:download-only-944932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-944932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:29.231657   16618 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:29.233266   16618 out.go:97] Downloading VM boot image ...
	I1009 18:46:29.233313   16618 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1009 18:46:46.484007   16618 out.go:97] Starting "download-only-944932" primary control-plane node in "download-only-944932" cluster
	I1009 18:46:46.484039   16618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:46.591859   16618 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:46:46.591900   16618 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:46.592073   16618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:46.593890   16618 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 18:46:46.593922   16618 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:46:46.714435   16618 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:47:12.046268   16618 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:47:12.046382   16618 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:47:12.949828   16618 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1009 18:47:12.950199   16618 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/download-only-944932/config.json ...
	I1009 18:47:12.950236   16618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/download-only-944932/config.json: {Name:mk375adf04f1eafa2308245c810a97810b8691f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:12.950403   16618 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:47:12.950603   16618 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-944932 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944932"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-944932
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (20.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-988518 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-988518 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (20.915072902s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (20.92s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1009 18:47:35.684401   16607 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1009 18:47:35.684434   16607 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-988518
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-988518: exit status 85 (60.910952ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-944932 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-944932        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| delete  | -p download-only-944932        | download-only-944932 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC | 09 Oct 24 18:47 UTC |
	| start   | -o=json --download-only        | download-only-988518 | jenkins | v1.34.0 | 09 Oct 24 18:47 UTC |                     |
	|         | -p download-only-988518        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:47:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:47:14.808650   16941 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:47:14.808774   16941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:14.808783   16941 out.go:358] Setting ErrFile to fd 2...
	I1009 18:47:14.808787   16941 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:47:14.808965   16941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 18:47:14.809508   16941 out.go:352] Setting JSON to true
	I1009 18:47:14.810298   16941 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1776,"bootTime":1728497859,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:47:14.810389   16941 start.go:139] virtualization: kvm guest
	I1009 18:47:14.812508   16941 out.go:97] [download-only-988518] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:47:14.812648   16941 notify.go:220] Checking for updates...
	I1009 18:47:14.813989   16941 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:47:14.815400   16941 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:47:14.816933   16941 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 18:47:14.818423   16941 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 18:47:14.819812   16941 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:47:14.822379   16941 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:47:14.822569   16941 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:47:14.853772   16941 out.go:97] Using the kvm2 driver based on user configuration
	I1009 18:47:14.853793   16941 start.go:297] selected driver: kvm2
	I1009 18:47:14.853799   16941 start.go:901] validating driver "kvm2" against <nil>
	I1009 18:47:14.854164   16941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:14.854279   16941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19780-9412/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1009 18:47:14.868372   16941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1009 18:47:14.868415   16941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:47:14.868921   16941 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1009 18:47:14.869087   16941 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:47:14.869116   16941 cni.go:84] Creating CNI manager for ""
	I1009 18:47:14.869169   16941 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1009 18:47:14.869183   16941 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1009 18:47:14.869239   16941 start.go:340] cluster config:
	{Name:download-only-988518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-988518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:47:14.869356   16941 iso.go:125] acquiring lock: {Name:mk2688815dbebddd55ac0027aba8e2703463a0a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:47:14.871124   16941 out.go:97] Starting "download-only-988518" primary control-plane node in "download-only-988518" cluster
	I1009 18:47:14.871151   16941 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:15.471415   16941 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:47:15.471444   16941 cache.go:56] Caching tarball of preloaded images
	I1009 18:47:15.471601   16941 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:15.473362   16941 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1009 18:47:15.473378   16941 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:47:15.589418   16941 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:47:33.894244   16941 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:47:33.894329   16941 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19780-9412/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:47:34.624687   16941 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:47:34.625049   16941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/download-only-988518/config.json ...
	I1009 18:47:34.625089   16941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/download-only-988518/config.json: {Name:mk7cd4da1c264a48158369e3f1d96b4971e42090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:34.625261   16941 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:47:34.625411   16941 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19780-9412/.minikube/cache/linux/amd64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-988518 host does not exist
	  To start a cluster, run: "minikube start -p download-only-988518"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-988518
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1009 18:47:36.241956   16607 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-505183 --alsologtostderr --binary-mirror http://127.0.0.1:43333 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-505183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-505183
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
x
+
TestOffline (78.92s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-035060 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-035060 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m17.853008314s)
helpers_test.go:175: Cleaning up "offline-crio-035060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-035060
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-035060: (1.065153996s)
--- PASS: TestOffline (78.92s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-421083
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-421083: exit status 85 (51.084425ms)

                                                
                                                
-- stdout --
	* Profile "addons-421083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-421083"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-421083
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-421083: exit status 85 (50.18474ms)

                                                
                                                
-- stdout --
	* Profile "addons-421083" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-421083"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (134.41s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-421083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-421083 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.414316298s)
--- PASS: TestAddons/Setup (134.41s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-421083 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-421083 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.58s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.132776ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-f92jv" [98955600-7b10-44b3-ac78-eff396b2c4ba] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003863785s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x986l" [f7e67133-eaf2-4276-8331-d8dd8cbf0c4a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003740926s
addons_test.go:331: (dbg) Run:  kubectl --context addons-421083 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-421083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-421083 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.837439775s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 ip
2024/10/09 18:58:20 [DEBUG] GET http://192.168.39.156:5000
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.58s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gltrx" [6d3c1a89-6093-4d48-9a58-ff71c7a28b5c] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004931467s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable inspektor-gadget --alsologtostderr -v=1: (5.829291814s)
--- PASS: TestAddons/parallel/InspektorGadget (10.84s)

                                                
                                    
x
+
TestAddons/parallel/CSI (67.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1009 18:58:21.411543   16607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1009 18:58:21.419120   16607 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:58:21.419142   16607 kapi.go:107] duration metric: took 7.61194ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.619538ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-421083 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-421083 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d9c15b69-31ae-48a3-9873-666bb1e74271] Pending
helpers_test.go:344: "task-pv-pod" [d9c15b69-31ae-48a3-9873-666bb1e74271] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d9c15b69-31ae-48a3-9873-666bb1e74271] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.004351138s
addons_test.go:511: (dbg) Run:  kubectl --context addons-421083 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-421083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-421083 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-421083 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-421083 delete pod task-pv-pod: (1.270180224s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-421083 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-421083 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-421083 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [494eeecd-f063-4c23-bc72-bbb7e8a13218] Pending
helpers_test.go:344: "task-pv-pod-restore" [494eeecd-f063-4c23-bc72-bbb7e8a13218] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [494eeecd-f063-4c23-bc72-bbb7e8a13218] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004338328s
addons_test.go:553: (dbg) Run:  kubectl --context addons-421083 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-421083 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-421083 delete volumesnapshot new-snapshot-demo
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861836652s)
--- PASS: TestAddons/parallel/CSI (67.78s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (22.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-421083 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-421083 --alsologtostderr -v=1: (1.070484381s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9vzmj" [d168ed6d-f646-480c-adfb-8b004d88f18c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9vzmj" [d168ed6d-f646-480c-adfb-8b004d88f18c] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004735523s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable headlamp --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable headlamp --alsologtostderr -v=1: (5.724150205s)
--- PASS: TestAddons/parallel/Headlamp (22.80s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-v4xvb" [75da21c7-6538-43ee-b788-5dd526c6149f] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005438395s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (59.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-421083 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-421083 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [178c1169-e7b0-48c2-811b-667773b90b25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [178c1169-e7b0-48c2-811b-667773b90b25] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [178c1169-e7b0-48c2-811b-667773b90b25] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 8.004469292s
addons_test.go:902: (dbg) Run:  kubectl --context addons-421083 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 ssh "cat /opt/local-path-provisioner/pvc-e5d4b64b-252d-4269-93cd-d7941b14a023_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-421083 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-421083 delete pvc test-pvc
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.429824983s)
--- PASS: TestAddons/parallel/LocalPath (59.23s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.96s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4k6f6" [c45cd383-1866-4787-a24e-bac7c6eb0863] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.008876778s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.96s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vpmhq" [fe4d4ee8-3bb0-4b0e-a5d0-d92fd49967ed] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005420711s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable yakd --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-421083 addons disable yakd --alsologtostderr -v=1: (5.820353621s)
--- PASS: TestAddons/parallel/Yakd (11.83s)

                                                
                                    
x
+
TestCertOptions (67.12s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-744883 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1009 20:04:34.683494   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:04:51.612856   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:04:51.908506   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-744883 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m5.693575384s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-744883 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-744883 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-744883 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-744883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-744883
--- PASS: TestCertOptions (67.12s)

                                                
                                    
x
+
TestCertExpiration (301.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-261596 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-261596 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (42.393364199s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-261596 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-261596 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m18.37835214s)
helpers_test.go:175: Cleaning up "cert-expiration-261596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-261596
--- PASS: TestCertExpiration (301.56s)

                                                
                                    
x
+
TestForceSystemdFlag (45.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-499844 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-499844 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.069248075s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-499844 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-499844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-499844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-499844: (1.047895144s)
--- PASS: TestForceSystemdFlag (45.31s)

                                                
                                    
x
+
TestForceSystemdEnv (69.1s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-876990 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-876990 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.318959152s)
helpers_test.go:175: Cleaning up "force-systemd-env-876990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-876990
--- PASS: TestForceSystemdEnv (69.10s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.63s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1009 20:05:41.376960   16607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 20:05:41.377086   16607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1009 20:05:41.403826   16607 install.go:62] docker-machine-driver-kvm2: exit status 1
W1009 20:05:41.404142   16607 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 20:05:41.404191   16607 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1912543329/001/docker-machine-driver-kvm2
I1009 20:05:41.683840   16607 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1912543329/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80] Decompressors:map[bz2:0xc000121460 gz:0xc000121468 tar:0xc000121410 tar.bz2:0xc000121420 tar.gz:0xc000121430 tar.xz:0xc000121440 tar.zst:0xc000121450 tbz2:0xc000121420 tgz:0xc000121430 txz:0xc000121440 tzst:0xc000121450 xz:0xc000121470 zip:0xc000121480 zst:0xc000121478] Getters:map[file:0xc00209c630 http:0xc00089e280 https:0xc00089e2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 20:05:41.683880   16607 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1912543329/001/docker-machine-driver-kvm2
I1009 20:05:44.025605   16607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 20:05:44.025680   16607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 20:05:44.052232   16607 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1009 20:05:44.052261   16607 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1009 20:05:44.052317   16607 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 20:05:44.052339   16607 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1912543329/002/docker-machine-driver-kvm2
I1009 20:05:44.107213   16607 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate1912543329/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80] Decompressors:map[bz2:0xc000121460 gz:0xc000121468 tar:0xc000121410 tar.bz2:0xc000121420 tar.gz:0xc000121430 tar.xz:0xc000121440 tar.zst:0xc000121450 tbz2:0xc000121420 tgz:0xc000121430 txz:0xc000121440 tzst:0xc000121450 xz:0xc000121470 zip:0xc000121480 zst:0xc000121478] Getters:map[file:0xc00209c3d0 http:0xc0009a8730 https:0xc0009a8780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 20:05:44.107247   16607 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1912543329/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.63s)

                                                
                                    
x
+
TestErrorSpam/setup (39.36s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-720202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-720202 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-720202 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-720202 --driver=kvm2  --container-runtime=crio: (39.364394839s)
--- PASS: TestErrorSpam/setup (39.36s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (4.97s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop: (2.344664936s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop: (1.143607044s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-720202 --log_dir /tmp/nospam-720202 stop: (1.483994787s)
--- PASS: TestErrorSpam/stop (4.97s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19780-9412/.minikube/files/etc/test/nested/copy/16607/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (85.29s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-179337 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m25.288079743s)
--- PASS: TestFunctional/serial/StartWithProxy (85.29s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.18s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1009 19:08:24.978115   16607 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-179337 --alsologtostderr -v=8: (38.176159758s)
functional_test.go:663: soft start took 38.176833356s for "functional-179337" cluster.
I1009 19:09:03.154660   16607 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (38.18s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-179337 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:3.1: (1.138022832s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:3.3: (1.302476764s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 cache add registry.k8s.io/pause:latest: (1.224678892s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.67s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-179337 /tmp/TestFunctionalserialCacheCmdcacheadd_local1027962007/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache add minikube-local-cache-test:functional-179337
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 cache add minikube-local-cache-test:functional-179337: (1.919007464s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache delete minikube-local-cache-test:functional-179337
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-179337
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.23s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (204.654627ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 cache reload: (1.01672259s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.69s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 kubectl -- --context functional-179337 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-179337 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.53s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-179337 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.533966768s)
functional_test.go:761: restart took 32.534071789s for "functional-179337" cluster.
I1009 19:09:44.003172   16607 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (32.53s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-179337 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.39s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 logs: (1.391284444s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 logs --file /tmp/TestFunctionalserialLogsFileCmd1120761853/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 logs --file /tmp/TestFunctionalserialLogsFileCmd1120761853/001/logs.txt: (1.353873105s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.61s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-179337 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-179337
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-179337: exit status 115 (266.875399ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.56:30709 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-179337 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-179337 delete -f testdata/invalidsvc.yaml: (1.142525518s)
--- PASS: TestFunctional/serial/InvalidService (4.61s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 config get cpus: exit status 14 (58.439747ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 config get cpus: exit status 14 (47.087019ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (15.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-179337 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-179337 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27594: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.31s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-179337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (186.742444ms)

                                                
                                                
-- stdout --
	* [functional-179337] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:10:03.667590   26872 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:03.667730   26872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:03.667742   26872 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:03.667748   26872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:03.668027   26872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:03.668753   26872 out.go:352] Setting JSON to false
	I1009 19:10:03.669986   26872 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3145,"bootTime":1728497859,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:03.670110   26872 start.go:139] virtualization: kvm guest
	I1009 19:10:03.672460   26872 out.go:177] * [functional-179337] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:03.673882   26872 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:03.673882   26872 notify.go:220] Checking for updates...
	I1009 19:10:03.675255   26872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:03.676932   26872 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:03.678534   26872 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:03.679666   26872 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:03.681112   26872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:03.682756   26872 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:10:03.683229   26872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:03.683310   26872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:03.703449   26872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36353
	I1009 19:10:03.704040   26872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:03.704860   26872 main.go:141] libmachine: Using API Version  1
	I1009 19:10:03.704892   26872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:03.705362   26872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:03.705637   26872 main.go:141] libmachine: (functional-179337) Calling .DriverName
	I1009 19:10:03.705934   26872 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:03.706345   26872 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:03.706394   26872 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:03.728466   26872 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1009 19:10:03.728994   26872 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:03.729533   26872 main.go:141] libmachine: Using API Version  1
	I1009 19:10:03.729554   26872 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:03.729879   26872 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:03.730050   26872 main.go:141] libmachine: (functional-179337) Calling .DriverName
	I1009 19:10:03.791459   26872 out.go:177] * Using the kvm2 driver based on existing profile
	I1009 19:10:03.792873   26872 start.go:297] selected driver: kvm2
	I1009 19:10:03.792893   26872 start.go:901] validating driver "kvm2" against &{Name:functional-179337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-179337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:03.793033   26872 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:03.795557   26872 out.go:201] 
	W1009 19:10:03.796684   26872 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 19:10:03.798426   26872 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)
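A dry-run start validates the requested resources without touching the existing VM, which is why the undersized memory request fails fast. A minimal sketch of the same check, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-179337",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log above shows exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) for 250MB.
		fmt.Println("dry-run rejected the request, exit code:", exitErr.ExitCode())
	}
}

The second invocation in the test, without --memory, exits cleanly because the profile's existing 4000MB allocation passes validation.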

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-179337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-179337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.527629ms)

                                                
                                                
-- stdout --
	* [functional-179337] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:10:03.999712   26999 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:10:03.999818   26999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:03.999827   26999 out.go:358] Setting ErrFile to fd 2...
	I1009 19:10:03.999831   26999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:10:04.000083   26999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:10:04.000643   26999 out.go:352] Setting JSON to false
	I1009 19:10:04.001600   26999 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3145,"bootTime":1728497859,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:10:04.001686   26999 start.go:139] virtualization: kvm guest
	I1009 19:10:04.003864   26999 out.go:177] * [functional-179337] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1009 19:10:04.005330   26999 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:10:04.005386   26999 notify.go:220] Checking for updates...
	I1009 19:10:04.007596   26999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:10:04.008589   26999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 19:10:04.009660   26999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 19:10:04.010845   26999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:10:04.012126   26999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:10:04.013968   26999 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:10:04.014571   26999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:04.014646   26999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:04.030478   26999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45095
	I1009 19:10:04.031069   26999 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:04.031650   26999 main.go:141] libmachine: Using API Version  1
	I1009 19:10:04.031672   26999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:04.031976   26999 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:04.032148   26999 main.go:141] libmachine: (functional-179337) Calling .DriverName
	I1009 19:10:04.032349   26999 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:10:04.032686   26999 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:10:04.032728   26999 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:10:04.047823   26999 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36985
	I1009 19:10:04.048230   26999 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:10:04.048744   26999 main.go:141] libmachine: Using API Version  1
	I1009 19:10:04.048770   26999 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:10:04.049214   26999 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:10:04.049407   26999 main.go:141] libmachine: (functional-179337) Calling .DriverName
	I1009 19:10:04.082644   26999 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1009 19:10:04.083888   26999 start.go:297] selected driver: kvm2
	I1009 19:10:04.083918   26999 start.go:901] validating driver "kvm2" against &{Name:functional-179337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-179337 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.56 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:10:04.084035   26999 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:10:04.086685   26999 out.go:201] 
	W1009 19:10:04.088000   26999 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 19:10:04.089641   26999 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 status
E1009 19:09:53.197531   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
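The three status invocations above differ only in output format. A short sketch, assuming minikube on PATH; the label text left of each colon in the -f template (host, kublet, ...) is printed literally, only the {{.Field}} references are evaluated:

package main

import (
	"fmt"
	"os/exec"
)

// status runs `minikube status` for the functional-179337 profile with extra flags.
func status(extra ...string) {
	args := append([]string{"-p", "functional-179337", "status"}, extra...)
	out, _ := exec.Command("minikube", args...).CombinedOutput()
	fmt.Print(string(out))
}

func main() {
	status()                                                // default human-readable output
	status("-f", "host:{{.Host}},apiserver:{{.APIServer}}") // Go template over the status struct
	status("-o", "json")                                    // machine-readable JSON
}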

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-179337 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-179337 expose deployment hello-node-connect --type=NodePort --port=8080
E1009 19:09:52.072387   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4ktfc" [24ba80cf-e88c-45d1-9f06-ef9b134d8d26] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E1009 19:09:52.234551   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4ktfc" [24ba80cf-e88c-45d1-9f06-ef9b134d8d26] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003177604s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.56:31478
functional_test.go:1675: http://192.168.39.56:31478: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-4ktfc

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.56:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.56:31478
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.61s)
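End to end, the test creates a deployment, exposes it as a NodePort service, asks minikube for the reachable URL and then issues a plain HTTP GET. A condensed sketch, assuming kubectl and minikube on PATH and omitting the readiness wait that the real test performs:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	exec.Command("kubectl", "--context", "functional-179337", "create", "deployment",
		"hello-node-connect", "--image=registry.k8s.io/echoserver:1.8").Run()
	exec.Command("kubectl", "--context", "functional-179337", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

	// Resolve the node IP and NodePort into a URL (http://192.168.39.56:31478 in the log).
	out, err := exec.Command("minikube", "-p", "functional-179337",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("GET", url, "->", resp.Status)
	fmt.Println(string(body))
}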

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 addons list -o json
E1009 19:09:51.949498   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:09:51.990914   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (49.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6483ca31-b9ab-44fc-8bc7-c8e3a3f6043d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004413383s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-179337 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-179337 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-179337 get pvc myclaim -o=json
E1009 19:09:57.041033   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-179337 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fef61ab7-c4ee-4107-8855-29fc81f45ae5] Pending
helpers_test.go:344: "sp-pod" [fef61ab7-c4ee-4107-8855-29fc81f45ae5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fef61ab7-c4ee-4107-8855-29fc81f45ae5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.004984902s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-179337 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-179337 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-179337 delete -f testdata/storage-provisioner/pod.yaml: (1.002417657s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-179337 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [87ccf33c-625d-42d3-8aae-6af4a348666d] Pending
helpers_test.go:344: "sp-pod" [87ccf33c-625d-42d3-8aae-6af4a348666d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [87ccf33c-625d-42d3-8aae-6af4a348666d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 24.004587191s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-179337 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (49.35s)
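The sequence above is a persistence check: write a file through the first pod, delete the pod, recreate it against the same claim and confirm the file is still there. A sketch of the same flow driven through kubectl, assuming the manifests from testdata/storage-provisioner (a PVC named myclaim, mounted by sp-pod at /tmp/mount) and skipping the Running waits:

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the functional-179337 context and returns combined output.
func kc(args ...string) string {
	full := append([]string{"--context", "functional-179337"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
	return string(out)
}

func main() {
	kc("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim, and the file written to it, must survive.
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}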

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh -n functional-179337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cp functional-179337:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3503062843/001/cp-test.txt
E1009 19:09:51.909227   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:09:51.915748   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:09:51.927408   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh -n functional-179337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh -n functional-179337 "sudo cat /tmp/does/not/exist/cp-test.txt"
E1009 19:09:52.556086   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)
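The cp/ssh pairs above copy a file into the node and read it straight back. A compact sketch of the same round trip, assuming minikube on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube CLI for the functional-179337 profile and returns combined output.
func mk(args ...string) string {
	full := append([]string{"-p", "functional-179337"}, args...)
	out, err := exec.Command("minikube", full...).CombinedOutput()
	if err != nil {
		fmt.Println("minikube error:", err)
	}
	return string(out)
}

func main() {
	// Host -> node, then read it back over ssh.
	mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Print(mk("ssh", "-n", "functional-179337", "sudo cat /home/docker/cp-test.txt"))

	// Node -> host works the same way, with a <node>:<path> source.
	mk("cp", "functional-179337:/home/docker/cp-test.txt", "/tmp/cp-test.txt")
}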

                                                
                                    
x
+
TestFunctional/parallel/MySQL (31.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-179337 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-dlztf" [8eea8159-2b8c-4bde-89f4-b0559277a3f4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-dlztf" [8eea8159-2b8c-4bde-89f4-b0559277a3f4] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.004866493s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-179337 exec mysql-6cdb49bbb-dlztf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-179337 exec mysql-6cdb49bbb-dlztf -- mysql -ppassword -e "show databases;": exit status 1 (130.478997ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 19:10:38.927609   16607 retry.go:31] will retry after 635.779475ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-179337 exec mysql-6cdb49bbb-dlztf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-179337 exec mysql-6cdb49bbb-dlztf -- mysql -ppassword -e "show databases;": exit status 1 (120.155328ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 19:10:39.684611   16607 retry.go:31] will retry after 782.448105ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-179337 exec mysql-6cdb49bbb-dlztf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.97s)
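The two ERROR 2002 failures are a startup race: the pod is Running before mysqld inside it accepts connections, so the test simply retries. A sketch of the same retry loop, assuming the deployment from testdata/mysql.yaml is named mysql and already scheduled:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-179337",
			"exec", "deploy/mysql", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// Typically ERROR 2002 while mysqld is still initialising; back off and retry.
		fmt.Printf("attempt %d failed: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
}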

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16607/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /etc/test/nested/copy/16607/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16607.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /etc/ssl/certs/16607.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16607.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /usr/share/ca-certificates/16607.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/166072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /etc/ssl/certs/166072.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/166072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /usr/share/ca-certificates/166072.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-179337 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "sudo systemctl is-active docker": exit status 1 (256.442227ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "sudo systemctl is-active containerd": exit status 1 (262.253302ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
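With crio selected as the container runtime, docker and containerd are expected to be inactive, and systemctl is-active signals that with a non-zero exit on the remote side (status 3 in the log), which minikube ssh then propagates. A small sketch of the same probe, assuming minikube on PATH:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// isActive asks systemd inside the node whether a unit is running.
func isActive(unit string) (string, int) {
	cmd := exec.Command("minikube", "-p", "functional-179337", "ssh",
		"sudo systemctl is-active "+unit)
	out, err := cmd.CombinedOutput()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		state, code := isActive(unit)
		fmt.Printf("%s: %s (exit %d)\n", unit, state, code) // expect "inactive" and a non-zero exit
	}
}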

                                                
                                    
x
+
TestFunctional/parallel/License (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-179337 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-179337 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-5zgrx" [738c1208-9c7b-4b48-ae31-9045e92f8834] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-5zgrx" [738c1208-9c7b-4b48-ae31-9045e92f8834] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.00431827s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.19s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
E1009 19:09:54.479165   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1315: Took "285.475363ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "49.464314ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "319.68644ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "49.468624ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdany-port1153591654/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728500995126308450" to /tmp/TestFunctionalparallelMountCmdany-port1153591654/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728500995126308450" to /tmp/TestFunctionalparallelMountCmdany-port1153591654/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728500995126308450" to /tmp/TestFunctionalparallelMountCmdany-port1153591654/001/test-1728500995126308450
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.468123ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:09:55.360083   16607 retry.go:31] will retry after 538.247893ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 19:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 19:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 19:09 test-1728500995126308450
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh cat /mount-9p/test-1728500995126308450
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-179337 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c34b24aa-999f-4475-b14c-54bb6642f1db] Pending
helpers_test.go:344: "busybox-mount" [c34b24aa-999f-4475-b14c-54bb6642f1db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c34b24aa-999f-4475-b14c-54bb6642f1db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E1009 19:10:02.163130   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [c34b24aa-999f-4475-b14c-54bb6642f1db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004184192s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-179337 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdany-port1153591654/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.55s)
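The mount test starts a background 9p mount, polls until findmnt can see it inside the node, then exercises it from a pod. A trimmed sketch of the first part, assuming minikube on PATH and an existing local directory /tmp/mount-src (the real test uses a throwaway temp directory):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount in the background, the way the test's daemon helper does.
	mount := exec.Command("minikube", "-p", "functional-179337", "mount",
		"/tmp/mount-src:/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// The mount takes a moment to appear, hence the retry after the first
	// failed findmnt in the log; poll until it shows up.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-179337", "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}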

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service list -o json
functional_test.go:1494: Took "346.122014ms" to run "out/minikube-linux-amd64 -p functional-179337 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.56:30333
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdspecific-port965828402/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.303523ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1009 19:10:03.989610   16607 retry.go:31] will retry after 262.845297ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdspecific-port965828402/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "sudo umount -f /mount-9p": exit status 1 (250.273663ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-179337 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdspecific-port965828402/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.56:30333
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179337 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-179337
localhost/kicbase/echo-server:functional-179337
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179337 image ls --format short --alsologtostderr:
I1009 19:10:18.508136   28245 out.go:345] Setting OutFile to fd 1 ...
I1009 19:10:18.508730   28245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:18.508745   28245 out.go:358] Setting ErrFile to fd 2...
I1009 19:10:18.508753   28245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:18.509240   28245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
I1009 19:10:18.510297   28245 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:18.510409   28245 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:18.510746   28245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:18.510794   28245 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:18.525283   28245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
I1009 19:10:18.525833   28245 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:18.526456   28245 main.go:141] libmachine: Using API Version  1
I1009 19:10:18.526482   28245 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:18.526822   28245 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:18.527015   28245 main.go:141] libmachine: (functional-179337) Calling .GetState
I1009 19:10:18.529003   28245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:18.529047   28245 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:18.543590   28245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
I1009 19:10:18.543994   28245 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:18.544439   28245 main.go:141] libmachine: Using API Version  1
I1009 19:10:18.544458   28245 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:18.544853   28245 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:18.545058   28245 main.go:141] libmachine: (functional-179337) Calling .DriverName
I1009 19:10:18.545251   28245 ssh_runner.go:195] Run: systemctl --version
I1009 19:10:18.545278   28245 main.go:141] libmachine: (functional-179337) Calling .GetSSHHostname
I1009 19:10:18.548493   28245 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:18.548885   28245 main.go:141] libmachine: (functional-179337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:bd:6d", ip: ""} in network mk-functional-179337: {Iface:virbr1 ExpiryTime:2024-10-09 20:07:14 +0000 UTC Type:0 Mac:52:54:00:80:bd:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-179337 Clientid:01:52:54:00:80:bd:6d}
I1009 19:10:18.548915   28245 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined IP address 192.168.39.56 and MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:18.549034   28245 main.go:141] libmachine: (functional-179337) Calling .GetSSHPort
I1009 19:10:18.549186   28245 main.go:141] libmachine: (functional-179337) Calling .GetSSHKeyPath
I1009 19:10:18.549327   28245 main.go:141] libmachine: (functional-179337) Calling .GetSSHUsername
I1009 19:10:18.549477   28245 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/functional-179337/id_rsa Username:docker}
I1009 19:10:18.638756   28245 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:10:18.715677   28245 main.go:141] libmachine: Making call to close driver server
I1009 19:10:18.715695   28245 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:18.715964   28245 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:18.715977   28245 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:18.715992   28245 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:18.716002   28245 main.go:141] libmachine: Making call to close driver server
I1009 19:10:18.716014   28245 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:18.716236   28245 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:18.716241   28245 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:18.716273   28245 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
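As the stderr trace above shows, "image ls" does not query the container runtime directly: minikube opens an SSH session to the node VM, runs "sudo crictl images --output json" there, and renders the JSON client-side in the requested format. Assuming the same profile is still running, the raw listing can be reproduced by hand with something along the lines of:

    out/minikube-linux-amd64 -p functional-179337 ssh -- sudo crictl images --output json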

TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179337 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-179337  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| localhost/minikube-local-cache-test     | functional-179337  | 45a70460000dd | 3.33kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179337 image ls --format table --alsologtostderr:
I1009 19:10:20.609022   28387 out.go:345] Setting OutFile to fd 1 ...
I1009 19:10:20.609120   28387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:20.609128   28387 out.go:358] Setting ErrFile to fd 2...
I1009 19:10:20.609132   28387 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:20.609289   28387 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
I1009 19:10:20.609826   28387 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:20.609921   28387 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:20.610264   28387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:20.610304   28387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:20.624570   28387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
I1009 19:10:20.625011   28387 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:20.625533   28387 main.go:141] libmachine: Using API Version  1
I1009 19:10:20.625554   28387 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:20.625938   28387 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:20.626099   28387 main.go:141] libmachine: (functional-179337) Calling .GetState
I1009 19:10:20.627879   28387 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:20.627918   28387 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:20.642118   28387 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37721
I1009 19:10:20.642588   28387 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:20.643137   28387 main.go:141] libmachine: Using API Version  1
I1009 19:10:20.643181   28387 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:20.643466   28387 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:20.643627   28387 main.go:141] libmachine: (functional-179337) Calling .DriverName
I1009 19:10:20.643809   28387 ssh_runner.go:195] Run: systemctl --version
I1009 19:10:20.643836   28387 main.go:141] libmachine: (functional-179337) Calling .GetSSHHostname
I1009 19:10:20.646484   28387 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:20.646881   28387 main.go:141] libmachine: (functional-179337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:bd:6d", ip: ""} in network mk-functional-179337: {Iface:virbr1 ExpiryTime:2024-10-09 20:07:14 +0000 UTC Type:0 Mac:52:54:00:80:bd:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-179337 Clientid:01:52:54:00:80:bd:6d}
I1009 19:10:20.646913   28387 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined IP address 192.168.39.56 and MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:20.647077   28387 main.go:141] libmachine: (functional-179337) Calling .GetSSHPort
I1009 19:10:20.647223   28387 main.go:141] libmachine: (functional-179337) Calling .GetSSHKeyPath
I1009 19:10:20.647359   28387 main.go:141] libmachine: (functional-179337) Calling .GetSSHUsername
I1009 19:10:20.647491   28387 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/functional-179337/id_rsa Username:docker}
I1009 19:10:20.726378   28387 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:10:20.761836   28387 main.go:141] libmachine: Making call to close driver server
I1009 19:10:20.761853   28387 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:20.762127   28387 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:20.762138   28387 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:20.762153   28387 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:20.762164   28387 main.go:141] libmachine: Making call to close driver server
I1009 19:10:20.762172   28387 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:20.762457   28387 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:20.762472   28387 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:20.762472   28387 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179337 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"873ed7
5102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"7f553
e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-179337"],"size":"4943877"},{"id":"45a70460000dd533c7b88c9736bf215459e81d7ee096342ceff96cc4b1987db1","repoDigests":["localhost/minikube-local-cache-test@sha256:e0a380891470e68a0fbee065f4156069d4639d045d292ee913c0dd070ef547fc"],"repoTags":["localhost/minikube-local-cache-test:functional-179337"],"size":"3330"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1
f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4
c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTag
s":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:c
b9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179337 image ls --format json --alsologtostderr:
I1009 19:10:20.402874   28363 out.go:345] Setting OutFile to fd 1 ...
I1009 19:10:20.402981   28363 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:20.402991   28363 out.go:358] Setting ErrFile to fd 2...
I1009 19:10:20.402995   28363 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:20.403203   28363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
I1009 19:10:20.403776   28363 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:20.403870   28363 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:20.404207   28363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:20.404248   28363 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:20.419081   28363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36165
I1009 19:10:20.419580   28363 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:20.420197   28363 main.go:141] libmachine: Using API Version  1
I1009 19:10:20.420219   28363 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:20.420590   28363 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:20.420770   28363 main.go:141] libmachine: (functional-179337) Calling .GetState
I1009 19:10:20.422476   28363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:20.422520   28363 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:20.436674   28363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40157
I1009 19:10:20.437162   28363 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:20.437667   28363 main.go:141] libmachine: Using API Version  1
I1009 19:10:20.437686   28363 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:20.437964   28363 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:20.438141   28363 main.go:141] libmachine: (functional-179337) Calling .DriverName
I1009 19:10:20.438314   28363 ssh_runner.go:195] Run: systemctl --version
I1009 19:10:20.438335   28363 main.go:141] libmachine: (functional-179337) Calling .GetSSHHostname
I1009 19:10:20.440799   28363 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:20.441145   28363 main.go:141] libmachine: (functional-179337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:bd:6d", ip: ""} in network mk-functional-179337: {Iface:virbr1 ExpiryTime:2024-10-09 20:07:14 +0000 UTC Type:0 Mac:52:54:00:80:bd:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-179337 Clientid:01:52:54:00:80:bd:6d}
I1009 19:10:20.441166   28363 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined IP address 192.168.39.56 and MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:20.441291   28363 main.go:141] libmachine: (functional-179337) Calling .GetSSHPort
I1009 19:10:20.441446   28363 main.go:141] libmachine: (functional-179337) Calling .GetSSHKeyPath
I1009 19:10:20.441571   28363 main.go:141] libmachine: (functional-179337) Calling .GetSSHUsername
I1009 19:10:20.441695   28363 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/functional-179337/id_rsa Username:docker}
I1009 19:10:20.521777   28363 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:10:20.562051   28363 main.go:141] libmachine: Making call to close driver server
I1009 19:10:20.562066   28363 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:20.562394   28363 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:20.562397   28363 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:20.562432   28363 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:20.562446   28363 main.go:141] libmachine: Making call to close driver server
I1009 19:10:20.562455   28363 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:20.562672   28363 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:20.562688   28363 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:20.562708   28363 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179337 image ls --format yaml --alsologtostderr:
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-179337
size: "4943877"
- id: 45a70460000dd533c7b88c9736bf215459e81d7ee096342ceff96cc4b1987db1
repoDigests:
- localhost/minikube-local-cache-test@sha256:e0a380891470e68a0fbee065f4156069d4639d045d292ee913c0dd070ef547fc
repoTags:
- localhost/minikube-local-cache-test:functional-179337
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179337 image ls --format yaml --alsologtostderr:
I1009 19:10:18.775396   28269 out.go:345] Setting OutFile to fd 1 ...
I1009 19:10:18.775511   28269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:18.775520   28269 out.go:358] Setting ErrFile to fd 2...
I1009 19:10:18.775533   28269 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:18.775824   28269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
I1009 19:10:18.776589   28269 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:18.776732   28269 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:18.777195   28269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:18.777289   28269 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:18.793480   28269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33095
I1009 19:10:18.795600   28269 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:18.797459   28269 main.go:141] libmachine: Using API Version  1
I1009 19:10:18.797482   28269 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:18.797832   28269 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:18.798013   28269 main.go:141] libmachine: (functional-179337) Calling .GetState
I1009 19:10:18.800404   28269 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:18.800448   28269 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:18.817772   28269 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35723
I1009 19:10:18.818225   28269 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:18.818861   28269 main.go:141] libmachine: Using API Version  1
I1009 19:10:18.818887   28269 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:18.819236   28269 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:18.819540   28269 main.go:141] libmachine: (functional-179337) Calling .DriverName
I1009 19:10:18.819784   28269 ssh_runner.go:195] Run: systemctl --version
I1009 19:10:18.819826   28269 main.go:141] libmachine: (functional-179337) Calling .GetSSHHostname
I1009 19:10:18.822589   28269 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:18.822919   28269 main.go:141] libmachine: (functional-179337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:bd:6d", ip: ""} in network mk-functional-179337: {Iface:virbr1 ExpiryTime:2024-10-09 20:07:14 +0000 UTC Type:0 Mac:52:54:00:80:bd:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-179337 Clientid:01:52:54:00:80:bd:6d}
I1009 19:10:18.822937   28269 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined IP address 192.168.39.56 and MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:18.823174   28269 main.go:141] libmachine: (functional-179337) Calling .GetSSHPort
I1009 19:10:18.823346   28269 main.go:141] libmachine: (functional-179337) Calling .GetSSHKeyPath
I1009 19:10:18.823529   28269 main.go:141] libmachine: (functional-179337) Calling .GetSSHUsername
I1009 19:10:18.823648   28269 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/functional-179337/id_rsa Username:docker}
I1009 19:10:18.934207   28269 ssh_runner.go:195] Run: sudo crictl images --output json
I1009 19:10:18.969935   28269 main.go:141] libmachine: Making call to close driver server
I1009 19:10:18.969948   28269 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:18.970243   28269 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:18.970261   28269 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:18.970283   28269 main.go:141] libmachine: Making call to close driver server
I1009 19:10:18.970248   28269 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:18.970291   28269 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:18.970552   28269 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:18.970566   28269 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh pgrep buildkitd: exit status 1 (187.852028ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image build -t localhost/my-image:functional-179337 testdata/build --alsologtostderr
2024/10/09 19:10:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 image build -t localhost/my-image:functional-179337 testdata/build --alsologtostderr: (3.958660434s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-179337 image build -t localhost/my-image:functional-179337 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4c6095df300
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-179337
--> 1815b7b4d7d
Successfully tagged localhost/my-image:functional-179337
1815b7b4d7dda6f13b6b406380e11cc18cf94c2ef5b416b103d3565004eb4c82
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-179337 image build -t localhost/my-image:functional-179337 testdata/build --alsologtostderr:
I1009 19:10:19.249597   28322 out.go:345] Setting OutFile to fd 1 ...
I1009 19:10:19.249711   28322 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:19.249719   28322 out.go:358] Setting ErrFile to fd 2...
I1009 19:10:19.249723   28322 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:10:19.249893   28322 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
I1009 19:10:19.250445   28322 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:19.250879   28322 config.go:182] Loaded profile config "functional-179337": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:10:19.251263   28322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:19.251302   28322 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:19.265816   28322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
I1009 19:10:19.266304   28322 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:19.266919   28322 main.go:141] libmachine: Using API Version  1
I1009 19:10:19.266950   28322 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:19.267327   28322 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:19.267536   28322 main.go:141] libmachine: (functional-179337) Calling .GetState
I1009 19:10:19.269477   28322 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1009 19:10:19.269513   28322 main.go:141] libmachine: Launching plugin server for driver kvm2
I1009 19:10:19.283812   28322 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34835
I1009 19:10:19.284283   28322 main.go:141] libmachine: () Calling .GetVersion
I1009 19:10:19.284765   28322 main.go:141] libmachine: Using API Version  1
I1009 19:10:19.284786   28322 main.go:141] libmachine: () Calling .SetConfigRaw
I1009 19:10:19.285097   28322 main.go:141] libmachine: () Calling .GetMachineName
I1009 19:10:19.285306   28322 main.go:141] libmachine: (functional-179337) Calling .DriverName
I1009 19:10:19.285490   28322 ssh_runner.go:195] Run: systemctl --version
I1009 19:10:19.285521   28322 main.go:141] libmachine: (functional-179337) Calling .GetSSHHostname
I1009 19:10:19.288327   28322 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:19.288710   28322 main.go:141] libmachine: (functional-179337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:bd:6d", ip: ""} in network mk-functional-179337: {Iface:virbr1 ExpiryTime:2024-10-09 20:07:14 +0000 UTC Type:0 Mac:52:54:00:80:bd:6d Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:functional-179337 Clientid:01:52:54:00:80:bd:6d}
I1009 19:10:19.288741   28322 main.go:141] libmachine: (functional-179337) DBG | domain functional-179337 has defined IP address 192.168.39.56 and MAC address 52:54:00:80:bd:6d in network mk-functional-179337
I1009 19:10:19.288894   28322 main.go:141] libmachine: (functional-179337) Calling .GetSSHPort
I1009 19:10:19.289050   28322 main.go:141] libmachine: (functional-179337) Calling .GetSSHKeyPath
I1009 19:10:19.289176   28322 main.go:141] libmachine: (functional-179337) Calling .GetSSHUsername
I1009 19:10:19.289314   28322 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/functional-179337/id_rsa Username:docker}
I1009 19:10:19.381173   28322 build_images.go:161] Building image from path: /tmp/build.1410518397.tar
I1009 19:10:19.381244   28322 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:10:19.394945   28322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1410518397.tar
I1009 19:10:19.401435   28322 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1410518397.tar: stat -c "%s %y" /var/lib/minikube/build/build.1410518397.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1410518397.tar': No such file or directory
I1009 19:10:19.401480   28322 ssh_runner.go:362] scp /tmp/build.1410518397.tar --> /var/lib/minikube/build/build.1410518397.tar (3072 bytes)
I1009 19:10:19.438848   28322 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1410518397
I1009 19:10:19.450395   28322 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1410518397 -xf /var/lib/minikube/build/build.1410518397.tar
I1009 19:10:19.464294   28322 crio.go:315] Building image: /var/lib/minikube/build/build.1410518397
I1009 19:10:19.464390   28322 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-179337 /var/lib/minikube/build/build.1410518397 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 19:10:23.096051   28322 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-179337 /var/lib/minikube/build/build.1410518397 --cgroup-manager=cgroupfs: (3.631633725s)
I1009 19:10:23.096116   28322 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1410518397
I1009 19:10:23.121369   28322 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1410518397.tar
I1009 19:10:23.159073   28322 build_images.go:217] Built localhost/my-image:functional-179337 from /tmp/build.1410518397.tar
I1009 19:10:23.159108   28322 build_images.go:133] succeeded building to: functional-179337
I1009 19:10:23.159113   28322 build_images.go:134] failed building to: 
I1009 19:10:23.159156   28322 main.go:141] libmachine: Making call to close driver server
I1009 19:10:23.159173   28322 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:23.159445   28322 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:23.159461   28322 main.go:141] libmachine: Making call to close connection to plugin binary
I1009 19:10:23.159471   28322 main.go:141] libmachine: Making call to close driver server
I1009 19:10:23.159478   28322 main.go:141] libmachine: (functional-179337) Calling .Close
I1009 19:10:23.159690   28322 main.go:141] libmachine: (functional-179337) DBG | Closing plugin on server side
I1009 19:10:23.159725   28322 main.go:141] libmachine: Successfully made call to close driver server
I1009 19:10:23.159733   28322 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
E1009 19:10:32.887431   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.49s)
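The podman build output above echoes each Containerfile instruction as a STEP line, so the testdata/build context exercised by this test corresponds to a three-instruction Containerfile roughly like the following (reconstructed from the logged steps, not copied from the repository):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /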

TestFunctional/parallel/ImageCommands/Setup (2.11s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.082936862s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-179337
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.11s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T" /mount1: exit status 1 (280.237991ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1009 19:10:05.716483   16607 retry.go:31] will retry after 299.205173ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-179337 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-179337 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1000281901/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.44s)
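The first findmnt probe above exits non-zero, presumably because the three mount daemons had not yet finished exporting /mount1, /mount2 and /mount3; after the ~300ms retry all three probes succeed, and "mount --kill=true" plus the per-daemon stops verify cleanup. An individual mount can be checked by hand with the same command the test uses, e.g.:

    out/minikube-linux-amd64 -p functional-179337 ssh "findmnt -T /mount1"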

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image load --daemon kicbase/echo-server:functional-179337 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 image load --daemon kicbase/echo-server:functional-179337 --alsologtostderr: (2.771091303s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.03s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image load --daemon kicbase/echo-server:functional-179337 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-179337
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image load --daemon kicbase/echo-server:functional-179337 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
E1009 19:10:12.405330   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.95s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image save kicbase/echo-server:functional-179337 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image rm kicbase/echo-server:functional-179337 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-179337 image rm kicbase/echo-server:functional-179337 --alsologtostderr: (2.431233221s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)
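Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a full save/remove/reload round-trip of the tagged echo-server image through a tarball. The equivalent manual sequence, using the same paths these tests log, is roughly:

    out/minikube-linux-amd64 -p functional-179337 image save kicbase/echo-server:functional-179337 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-179337 image rm kicbase/echo-server:functional-179337
    out/minikube-linux-amd64 -p functional-179337 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar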

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-179337
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-179337 image save --daemon kicbase/echo-server:functional-179337 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-179337
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.58s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-179337
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-179337
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-179337
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (199.03s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-199780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1009 19:11:13.849594   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:12:35.771552   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-199780 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m18.371361111s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (199.03s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.23s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-199780 -- rollout status deployment/busybox: (5.164193135s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-6v84n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-8946j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-9j59h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-6v84n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-8946j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-9j59h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-6v84n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-8946j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-9j59h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.23s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.15s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-6v84n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-6v84n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-8946j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-8946j -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-9j59h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-199780 -- exec busybox-7dff88458-9j59h -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.15s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (56.36s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-199780 -v=7 --alsologtostderr
E1009 19:14:51.613050   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.619445   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.630781   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.652133   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.693530   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.774941   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.908519   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:51.936994   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:52.258609   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:52.900880   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:54.183051   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:14:56.745049   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:15:01.866832   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-199780 -v=7 --alsologtostderr: (55.523587638s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.36s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-199780 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.6s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp testdata/cp-test.txt ha-199780:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780:/home/docker/cp-test.txt ha-199780-m02:/home/docker/cp-test_ha-199780_ha-199780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test_ha-199780_ha-199780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780:/home/docker/cp-test.txt ha-199780-m03:/home/docker/cp-test_ha-199780_ha-199780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test_ha-199780_ha-199780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780:/home/docker/cp-test.txt ha-199780-m04:/home/docker/cp-test_ha-199780_ha-199780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test_ha-199780_ha-199780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp testdata/cp-test.txt ha-199780-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m02:/home/docker/cp-test.txt ha-199780:/home/docker/cp-test_ha-199780-m02_ha-199780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test.txt"
E1009 19:15:12.108247   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test_ha-199780-m02_ha-199780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m02:/home/docker/cp-test.txt ha-199780-m03:/home/docker/cp-test_ha-199780-m02_ha-199780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test_ha-199780-m02_ha-199780-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m02:/home/docker/cp-test.txt ha-199780-m04:/home/docker/cp-test_ha-199780-m02_ha-199780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test_ha-199780-m02_ha-199780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp testdata/cp-test.txt ha-199780-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt ha-199780:/home/docker/cp-test_ha-199780-m03_ha-199780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test_ha-199780-m03_ha-199780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt ha-199780-m02:/home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test_ha-199780-m03_ha-199780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m03:/home/docker/cp-test.txt ha-199780-m04:/home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test_ha-199780-m03_ha-199780-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp testdata/cp-test.txt ha-199780-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1421987969/001/cp-test_ha-199780-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt ha-199780:/home/docker/cp-test_ha-199780-m04_ha-199780.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780 "sudo cat /home/docker/cp-test_ha-199780-m04_ha-199780.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt ha-199780-m02:/home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m02 "sudo cat /home/docker/cp-test_ha-199780-m04_ha-199780-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 cp ha-199780-m04:/home/docker/cp-test.txt ha-199780-m03:/home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 ssh -n ha-199780-m03 "sudo cat /home/docker/cp-test_ha-199780-m04_ha-199780-m03.txt"
E1009 19:15:19.613749   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.60s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (16.65s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 node delete m03 -v=7 --alsologtostderr
E1009 19:24:51.614901   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:24:51.908355   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-199780 node delete m03 -v=7 --alsologtostderr: (15.934478231s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.65s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.59s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (357.41s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-199780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1009 19:29:51.614298   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:29:51.909009   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:31:14.680517   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-199780 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m56.662906943s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (357.41s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.73s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-199780 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-199780 --control-plane -v=7 --alsologtostderr: (1m19.897654423s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-199780 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.73s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
TestJSONOutput/start/Command (80.13s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-165605 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1009 19:34:51.614405   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:34:51.909161   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-165605 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.132792822s)
--- PASS: TestJSONOutput/start/Command (80.13s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-165605 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-165605 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.32s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-165605 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-165605 --output=json --user=testUser: (7.324076189s)
--- PASS: TestJSONOutput/stop/Command (7.32s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-166513 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-166513 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.633833ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b6f7bbb6-1563-4946-a3ea-e3dce8c0c900","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-166513] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8539d675-953e-416d-beae-d5819bde1f30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"efd0ab26-5726-4823-9296-e72d4565b26d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"78e4e006-3c72-4514-ae34-72c83d8c9c60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig"}}
	{"specversion":"1.0","id":"0f2c3ee8-feef-4d8b-8195-ca27d9b4dd71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube"}}
	{"specversion":"1.0","id":"52a27529-dd93-4a47-9f6c-64155e32f4f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f8e56e37-158e-4c51-bbf1-d205285c3edb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"60f85e8e-13ec-4472-9487-1280c32114bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-166513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-166513
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (87.66s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-129938 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-129938 --driver=kvm2  --container-runtime=crio: (46.010457853s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-143818 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-143818 --driver=kvm2  --container-runtime=crio: (38.703300373s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-129938
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-143818
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-143818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-143818
helpers_test.go:175: Cleaning up "first-129938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-129938
--- PASS: TestMinikubeProfile (87.66s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (28.5s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-369230 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-369230 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.502428418s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.50s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-369230 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-369230 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (25.15s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-384801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-384801 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (24.149236371s)
--- PASS: TestMountStart/serial/StartWithMountSecond (25.15s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-369230 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-384801
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-384801: (1.275332773s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.02s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-384801
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-384801: (24.023887005s)
--- PASS: TestMountStart/serial/RestartStopped (25.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-384801 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (113.38s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-707643 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1009 19:39:51.612818   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:51.908473   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-707643 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.980801201s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.38s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.06s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-707643 -- rollout status deployment/busybox: (5.620349842s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-9plt5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-hhq8q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-9plt5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-hhq8q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-9plt5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-hhq8q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.06s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.77s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-9plt5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-9plt5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-hhq8q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-707643 -- exec busybox-7dff88458-hhq8q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

                                                
                                    
TestMultiNode/serial/AddNode (49.58s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-707643 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-707643 -v 3 --alsologtostderr: (49.015586501s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.58s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-707643 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.57s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.57s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp testdata/cp-test.txt multinode-707643:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643:/home/docker/cp-test.txt multinode-707643-m02:/home/docker/cp-test_multinode-707643_multinode-707643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test_multinode-707643_multinode-707643-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643:/home/docker/cp-test.txt multinode-707643-m03:/home/docker/cp-test_multinode-707643_multinode-707643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test_multinode-707643_multinode-707643-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp testdata/cp-test.txt multinode-707643-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt multinode-707643:/home/docker/cp-test_multinode-707643-m02_multinode-707643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test_multinode-707643-m02_multinode-707643.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m02:/home/docker/cp-test.txt multinode-707643-m03:/home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test_multinode-707643-m02_multinode-707643-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp testdata/cp-test.txt multinode-707643-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3630358187/001/cp-test_multinode-707643-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt multinode-707643:/home/docker/cp-test_multinode-707643-m03_multinode-707643.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643 "sudo cat /home/docker/cp-test_multinode-707643-m03_multinode-707643.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 cp multinode-707643-m03:/home/docker/cp-test.txt multinode-707643-m02:/home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 ssh -n multinode-707643-m02 "sudo cat /home/docker/cp-test_multinode-707643-m03_multinode-707643-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)

                                                
                                    
TestMultiNode/serial/StopNode (2.37s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 node stop m03: (1.531690449s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-707643 status: exit status 7 (418.188352ms)

                                                
                                                
-- stdout --
	multinode-707643
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-707643-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-707643-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr: exit status 7 (419.901513ms)
-- stdout --
	multinode-707643
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-707643-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-707643-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 19:42:05.166706   45995 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:42:05.166933   45995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:05.166952   45995 out.go:358] Setting ErrFile to fd 2...
	I1009 19:42:05.166958   45995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:42:05.167191   45995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 19:42:05.167368   45995 out.go:352] Setting JSON to false
	I1009 19:42:05.167390   45995 mustload.go:65] Loading cluster: multinode-707643
	I1009 19:42:05.167528   45995 notify.go:220] Checking for updates...
	I1009 19:42:05.167821   45995 config.go:182] Loaded profile config "multinode-707643": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:42:05.167840   45995 status.go:174] checking status of multinode-707643 ...
	I1009 19:42:05.168393   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.168459   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.183927   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38905
	I1009 19:42:05.184429   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.185097   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.185146   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.185532   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.185737   45995 main.go:141] libmachine: (multinode-707643) Calling .GetState
	I1009 19:42:05.187520   45995 status.go:371] multinode-707643 host status = "Running" (err=<nil>)
	I1009 19:42:05.187538   45995 host.go:66] Checking if "multinode-707643" exists ...
	I1009 19:42:05.187807   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.187839   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.202549   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42603
	I1009 19:42:05.203017   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.203552   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.203573   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.203881   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.204041   45995 main.go:141] libmachine: (multinode-707643) Calling .GetIP
	I1009 19:42:05.206771   45995 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:42:05.207209   45995 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:42:05.207230   45995 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:42:05.207402   45995 host.go:66] Checking if "multinode-707643" exists ...
	I1009 19:42:05.207698   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.207731   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.222704   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I1009 19:42:05.223137   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.223615   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.223634   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.223995   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.224182   45995 main.go:141] libmachine: (multinode-707643) Calling .DriverName
	I1009 19:42:05.224392   45995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:42:05.224426   45995 main.go:141] libmachine: (multinode-707643) Calling .GetSSHHostname
	I1009 19:42:05.226935   45995 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:42:05.227407   45995 main.go:141] libmachine: (multinode-707643) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:ea:24", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:39:19 +0000 UTC Type:0 Mac:52:54:00:70:ea:24 Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-707643 Clientid:01:52:54:00:70:ea:24}
	I1009 19:42:05.227442   45995 main.go:141] libmachine: (multinode-707643) DBG | domain multinode-707643 has defined IP address 192.168.39.10 and MAC address 52:54:00:70:ea:24 in network mk-multinode-707643
	I1009 19:42:05.227561   45995 main.go:141] libmachine: (multinode-707643) Calling .GetSSHPort
	I1009 19:42:05.227720   45995 main.go:141] libmachine: (multinode-707643) Calling .GetSSHKeyPath
	I1009 19:42:05.227855   45995 main.go:141] libmachine: (multinode-707643) Calling .GetSSHUsername
	I1009 19:42:05.227962   45995 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643/id_rsa Username:docker}
	I1009 19:42:05.306870   45995 ssh_runner.go:195] Run: systemctl --version
	I1009 19:42:05.312776   45995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:42:05.328087   45995 kubeconfig.go:125] found "multinode-707643" server: "https://192.168.39.10:8443"
	I1009 19:42:05.328118   45995 api_server.go:166] Checking apiserver status ...
	I1009 19:42:05.328159   45995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:42:05.343053   45995 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup
	W1009 19:42:05.353174   45995 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1116/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1009 19:42:05.353215   45995 ssh_runner.go:195] Run: ls
	I1009 19:42:05.357800   45995 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I1009 19:42:05.361741   45995 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I1009 19:42:05.361763   45995 status.go:463] multinode-707643 apiserver status = Running (err=<nil>)
	I1009 19:42:05.361773   45995 status.go:176] multinode-707643 status: &{Name:multinode-707643 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:42:05.361788   45995 status.go:174] checking status of multinode-707643-m02 ...
	I1009 19:42:05.362104   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.362140   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.377935   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I1009 19:42:05.378432   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.379004   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.379028   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.379344   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.379550   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetState
	I1009 19:42:05.381032   45995 status.go:371] multinode-707643-m02 host status = "Running" (err=<nil>)
	I1009 19:42:05.381049   45995 host.go:66] Checking if "multinode-707643-m02" exists ...
	I1009 19:42:05.381333   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.381364   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.397321   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1009 19:42:05.397849   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.398344   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.398365   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.398692   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.398870   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetIP
	I1009 19:42:05.401729   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | domain multinode-707643-m02 has defined MAC address 52:54:00:44:c9:e4 in network mk-multinode-707643
	I1009 19:42:05.402277   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c9:e4", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:40:25 +0000 UTC Type:0 Mac:52:54:00:44:c9:e4 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-707643-m02 Clientid:01:52:54:00:44:c9:e4}
	I1009 19:42:05.402320   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | domain multinode-707643-m02 has defined IP address 192.168.39.115 and MAC address 52:54:00:44:c9:e4 in network mk-multinode-707643
	I1009 19:42:05.402477   45995 host.go:66] Checking if "multinode-707643-m02" exists ...
	I1009 19:42:05.402799   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.402853   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.418226   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I1009 19:42:05.418680   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.419151   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.419170   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.419478   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.419746   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .DriverName
	I1009 19:42:05.419980   45995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:42:05.420007   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetSSHHostname
	I1009 19:42:05.423154   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | domain multinode-707643-m02 has defined MAC address 52:54:00:44:c9:e4 in network mk-multinode-707643
	I1009 19:42:05.423619   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:44:c9:e4", ip: ""} in network mk-multinode-707643: {Iface:virbr1 ExpiryTime:2024-10-09 20:40:25 +0000 UTC Type:0 Mac:52:54:00:44:c9:e4 Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:multinode-707643-m02 Clientid:01:52:54:00:44:c9:e4}
	I1009 19:42:05.423653   45995 main.go:141] libmachine: (multinode-707643-m02) DBG | domain multinode-707643-m02 has defined IP address 192.168.39.115 and MAC address 52:54:00:44:c9:e4 in network mk-multinode-707643
	I1009 19:42:05.423847   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetSSHPort
	I1009 19:42:05.424010   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetSSHKeyPath
	I1009 19:42:05.424160   45995 main.go:141] libmachine: (multinode-707643-m02) Calling .GetSSHUsername
	I1009 19:42:05.424318   45995 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19780-9412/.minikube/machines/multinode-707643-m02/id_rsa Username:docker}
	I1009 19:42:05.506533   45995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:42:05.520220   45995 status.go:176] multinode-707643-m02 status: &{Name:multinode-707643-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:42:05.520274   45995 status.go:174] checking status of multinode-707643-m03 ...
	I1009 19:42:05.520786   45995 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1009 19:42:05.520838   45995 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1009 19:42:05.536535   45995 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I1009 19:42:05.537021   45995 main.go:141] libmachine: () Calling .GetVersion
	I1009 19:42:05.537722   45995 main.go:141] libmachine: Using API Version  1
	I1009 19:42:05.537753   45995 main.go:141] libmachine: () Calling .SetConfigRaw
	I1009 19:42:05.538061   45995 main.go:141] libmachine: () Calling .GetMachineName
	I1009 19:42:05.538233   45995 main.go:141] libmachine: (multinode-707643-m03) Calling .GetState
	I1009 19:42:05.539761   45995 status.go:371] multinode-707643-m03 host status = "Stopped" (err=<nil>)
	I1009 19:42:05.539776   45995 status.go:384] host is not running, skipping remaining checks
	I1009 19:42:05.539781   45995 status.go:176] multinode-707643-m03 status: &{Name:multinode-707643-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.37s)
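
The status output above shows how minikube reports a partially stopped cluster: after m03 is stopped, minikube status still prints the per-node breakdown but exits with status 7. Condensed from the commands in this run:

	$ out/minikube-linux-amd64 -p multinode-707643 node stop m03
	$ out/minikube-linux-amd64 -p multinode-707643 status
	# exit status 7; multinode-707643-m03 is listed as host: Stopped / kubelet: Stopped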

TestMultiNode/serial/StartAfterStop (40.46s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 node start m03 -v=7 --alsologtostderr: (39.837542296s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.46s)

TestMultiNode/serial/DeleteNode (2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-707643 node delete m03: (1.487828122s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.00s)

TestMultiNode/serial/RestartMultiNode (182.5s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-707643 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-707643 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m1.997851807s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-707643 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (182.50s)

TestMultiNode/serial/ValidateNameConflict (44.07s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-707643
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-707643-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-707643-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (61.832933ms)
-- stdout --
	* [multinode-707643-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-707643-m02' is duplicated with machine name 'multinode-707643-m02' in profile 'multinode-707643'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-707643-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-707643-m03 --driver=kvm2  --container-runtime=crio: (42.981316372s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-707643
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-707643: exit status 80 (210.179165ms)
-- stdout --
	* Adding node m03 to cluster multinode-707643 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-707643-m03 already exists in multinode-707643-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-707643-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.07s)
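
Both rejections above are argument validation rather than provisioning failures: a new profile may not reuse a machine name that already belongs to an existing profile (exit 14, MK_USAGE), and node add refuses a node name that already exists (exit 80, GUEST_NODE_ADD). The two offending invocations, copied from the log:

	$ out/minikube-linux-amd64 start -p multinode-707643-m02 --driver=kvm2 --container-runtime=crio
	# exit status 14: X Exiting due to MK_USAGE: Profile name should be unique
	$ out/minikube-linux-amd64 node add -p multinode-707643
	# exit status 80: X Exiting due to GUEST_NODE_ADD: failed to add node (m03 already exists)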

TestScheduledStopUnix (115.54s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-000241 --memory=2048 --driver=kvm2  --container-runtime=crio
E1009 19:59:34.983245   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-000241 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.979243811s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-000241 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-000241 -n scheduled-stop-000241
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-000241 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:59:47.266963   16607 retry.go:31] will retry after 97.473µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.268118   16607 retry.go:31] will retry after 100.191µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.269269   16607 retry.go:31] will retry after 232.011µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.270395   16607 retry.go:31] will retry after 466.665µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.271511   16607 retry.go:31] will retry after 539.831µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.272629   16607 retry.go:31] will retry after 655.788µs: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.273738   16607 retry.go:31] will retry after 1.666828ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.275934   16607 retry.go:31] will retry after 2.322883ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.279106   16607 retry.go:31] will retry after 2.676007ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.282274   16607 retry.go:31] will retry after 4.921441ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.287487   16607 retry.go:31] will retry after 8.470354ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.296746   16607 retry.go:31] will retry after 8.965049ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.305982   16607 retry.go:31] will retry after 8.322901ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.315233   16607 retry.go:31] will retry after 15.883568ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
I1009 19:59:47.331458   16607 retry.go:31] will retry after 28.807799ms: open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/scheduled-stop-000241/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-000241 --cancel-scheduled
E1009 19:59:51.613541   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:59:51.909314   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-000241 -n scheduled-stop-000241
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-000241
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-000241 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-000241
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-000241: exit status 7 (64.938567ms)
-- stdout --
	scheduled-stop-000241
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-000241 -n scheduled-stop-000241
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-000241 -n scheduled-stop-000241: exit status 7 (63.89104ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-000241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-000241
--- PASS: TestScheduledStopUnix (115.54s)
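
The scheduled-stop flow exercised above amounts to arming a delayed stop, cancelling it, re-arming it with a short timer, and then checking the profile once the timer fires. A condensed sketch built from the commands in this run:

	$ out/minikube-linux-amd64 stop -p scheduled-stop-000241 --schedule 5m
	$ out/minikube-linux-amd64 stop -p scheduled-stop-000241 --cancel-scheduled
	$ out/minikube-linux-amd64 stop -p scheduled-stop-000241 --schedule 15s
	$ out/minikube-linux-amd64 status -p scheduled-stop-000241
	# exit status 7 once the scheduled stop has taken effect (host: Stopped)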

TestRunningBinaryUpgrade (209.64s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2548641316 start -p running-upgrade-200546 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2548641316 start -p running-upgrade-200546 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m3.546779341s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-200546 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-200546 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m22.083342105s)
helpers_test.go:175: Cleaning up "running-upgrade-200546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-200546
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-200546: (1.144019065s)
--- PASS: TestRunningBinaryUpgrade (209.64s)

TestStoppedBinaryUpgrade/Setup (2.8s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.80s)

TestStoppedBinaryUpgrade/Upgrade (165.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1463357102 start -p stopped-upgrade-111682 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1463357102 start -p stopped-upgrade-111682 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m35.194344528s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1463357102 -p stopped-upgrade-111682 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1463357102 -p stopped-upgrade-111682 stop: (1.455441406s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-111682 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-111682 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.863972033s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (165.51s)
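
The upgrade scenario above boils down to three steps: start a cluster with an old release binary, stop it, then start the same profile again with the binary under test. The sequence, copied from the log (the /tmp path is the temporary copy of the v1.26.0 release used by the test):

	$ /tmp/minikube-v1.26.0.1463357102 start -p stopped-upgrade-111682 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
	$ /tmp/minikube-v1.26.0.1463357102 -p stopped-upgrade-111682 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-111682 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio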

TestPause/serial/Start (102.63s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-739381 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-739381 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m42.633047471s)
--- PASS: TestPause/serial/Start (102.63s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-111682
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-111682: (1.02576692s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

TestNetworkPlugins/group/false (3s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-665212 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-665212 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (100.064556ms)
-- stdout --
	* [false-665212] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1009 20:05:06.153371   56836 out.go:345] Setting OutFile to fd 1 ...
	I1009 20:05:06.153478   56836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:06.153487   56836 out.go:358] Setting ErrFile to fd 2...
	I1009 20:05:06.153491   56836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 20:05:06.153663   56836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9412/.minikube/bin
	I1009 20:05:06.154190   56836 out.go:352] Setting JSON to false
	I1009 20:05:06.155090   56836 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6447,"bootTime":1728497859,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 20:05:06.155184   56836 start.go:139] virtualization: kvm guest
	I1009 20:05:06.157352   56836 out.go:177] * [false-665212] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 20:05:06.158519   56836 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 20:05:06.158533   56836 notify.go:220] Checking for updates...
	I1009 20:05:06.160803   56836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 20:05:06.161881   56836 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	I1009 20:05:06.162993   56836 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	I1009 20:05:06.164055   56836 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 20:05:06.165057   56836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 20:05:06.166553   56836 config.go:182] Loaded profile config "cert-expiration-261596": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:05:06.166661   56836 config.go:182] Loaded profile config "cert-options-744883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 20:05:06.166757   56836 config.go:182] Loaded profile config "kubernetes-upgrade-790037": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1009 20:05:06.166846   56836 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 20:05:06.202661   56836 out.go:177] * Using the kvm2 driver based on user configuration
	I1009 20:05:06.204045   56836 start.go:297] selected driver: kvm2
	I1009 20:05:06.204058   56836 start.go:901] validating driver "kvm2" against <nil>
	I1009 20:05:06.204069   56836 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 20:05:06.206268   56836 out.go:201] 
	W1009 20:05:06.207678   56836 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 20:05:06.208889   56836 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-665212 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-665212

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-665212

>>> host: /etc/nsswitch.conf:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/hosts:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/resolv.conf:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-665212

>>> host: crictl pods:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: crictl containers:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> k8s: describe netcat deployment:
error: context "false-665212" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-665212" does not exist

>>> k8s: netcat logs:
error: context "false-665212" does not exist

>>> k8s: describe coredns deployment:
error: context "false-665212" does not exist

>>> k8s: describe coredns pods:
error: context "false-665212" does not exist

>>> k8s: coredns logs:
error: context "false-665212" does not exist

>>> k8s: describe api server pod(s):
error: context "false-665212" does not exist

>>> k8s: api server logs:
error: context "false-665212" does not exist

>>> host: /etc/cni:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: ip a s:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: ip r s:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: iptables-save:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: iptables table nat:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> k8s: describe kube-proxy daemon set:
error: context "false-665212" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-665212" does not exist

>>> k8s: kube-proxy logs:
error: context "false-665212" does not exist

>>> host: kubelet daemon status:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: kubelet daemon config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> k8s: kubelet logs:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-665212

>>> host: docker daemon status:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: docker daemon config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/docker/daemon.json:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: docker system info:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: cri-docker daemon status:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: cri-docker daemon config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: cri-dockerd version:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: containerd daemon status:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: containerd daemon config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/containerd/config.toml:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: containerd config dump:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: crio daemon status:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: crio daemon config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: /etc/crio:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

>>> host: crio config:
* Profile "false-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-665212"

----------------------- debugLogs end: false-665212 [took: 2.759099847s] --------------------------------
helpers_test.go:175: Cleaning up "false-665212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-665212
--- PASS: TestNetworkPlugins/group/false (3.00s)
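
This "false" group is a negative test: with the crio runtime, disabling CNI is rejected up front, so no cluster is ever created and the debugLogs queries above all run against a non-existent profile and context. The rejected invocation and its error, copied from the log:

	$ out/minikube-linux-amd64 start -p false-665212 --memory=2048 --alsologtostderr --cni=false --driver=kvm2 --container-runtime=crio
	# exit status 14: X Exiting due to MK_USAGE: The "crio" container runtime requires CNI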

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (64.123494ms)
-- stdout --
	* [NoKubernetes-615869] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9412/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9412/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
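Note on the exit status 14 above: it is the expected usage error, since --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal reproduction and the fix suggested in the stderr, using a hypothetical profile name:

    # Fails with MK_USAGE (exit 14): the two flags conflict.
    out/minikube-linux-amd64 start -p demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
    # Clear any globally configured version, then start without Kubernetes.
    out/minikube-linux-amd64 config unset kubernetes-version
    out/minikube-linux-amd64 start -p demo --no-kubernetes --driver=kvm2 --container-runtime=crio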

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (53.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-615869 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-615869 --driver=kvm2  --container-runtime=crio: (53.184891044s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-615869 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (53.43s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.277331343s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-615869 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-615869 status -o json: exit status 2 (244.178334ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-615869","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-615869
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.51s)
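The JSON status above is what the test inspects to confirm the VM is still running while Kubernetes is stopped. A quick manual check of the same fields, assuming jq is available on the host (the status command itself exits 2 here because the kubelet is stopped):

    out/minikube-linux-amd64 -p NoKubernetes-615869 status -o json | jq -r '.Host, .Kubelet'
    # Expected output: Running, then Stopped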

                                                
                                    
TestNoKubernetes/serial/Start (28.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-615869 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.298594418s)
--- PASS: TestNoKubernetes/serial/Start (28.30s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-615869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-615869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.1601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
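The exit status 3 reported over ssh is the expected outcome: systemctl is-active returns 0 only when the unit is active, so a non-zero exit confirms the kubelet is not running after a --no-kubernetes start. The equivalent manual check:

    out/minikube-linux-amd64 ssh -p NoKubernetes-615869 "sudo systemctl is-active --quiet service kubelet" \
      || echo "kubelet is not active (expected)"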

                                                
                                    
TestNoKubernetes/serial/ProfileList (11.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (11.183046544s)
--- PASS: TestNoKubernetes/serial/ProfileList (11.87s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-615869
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-615869: (1.295387635s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (32.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-615869 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-615869 --driver=kvm2  --container-runtime=crio: (32.629929595s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.63s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (97.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m37.652918569s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-615869 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-615869 "sudo systemctl is-active --quiet service kubelet": exit status 1 (204.921392ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (111.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-503330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-503330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m51.826185481s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (111.83s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-480205 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d2d8238b-b1d8-4770-90d6-27087a4a95b5] Pending
helpers_test.go:344: "busybox" [d2d8238b-b1d8-4770-90d6-27087a4a95b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d2d8238b-b1d8-4770-90d6-27087a4a95b5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004401762s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-480205 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.28s)
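The DeployApp step amounts to applying testdata/busybox.yaml, waiting for the pod labelled integration-test=busybox to become Ready, and reading the file-descriptor limit inside it. A rough shell equivalent against the same profile (the Go helpers poll the pod list rather than using kubectl wait):

    kubectl --context no-preload-480205 create -f testdata/busybox.yaml
    kubectl --context no-preload-480205 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-480205 exec busybox -- /bin/sh -c "ulimit -n"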

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-480205 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-480205 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-733270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1009 20:09:51.612877   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:09:51.908529   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/addons-421083/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-733270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m23.936378548s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-503330 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0a5813c5-5e96-4a9a-8900-d69f2a6f7f4e] Pending
helpers_test.go:344: "busybox" [0a5813c5-5e96-4a9a-8900-d69f2a6f7f4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0a5813c5-5e96-4a9a-8900-d69f2a6f7f4e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.153898447s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-503330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-503330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-503330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e7d10f59-a37c-4fad-b61d-5883a31cc28e] Pending
helpers_test.go:344: "busybox" [e7d10f59-a37c-4fad-b61d-5883a31cc28e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e7d10f59-a37c-4fad-b61d-5883a31cc28e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003904214s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-733270 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-733270 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (650.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-480205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-480205 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (10m50.633371526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-480205 -n no-preload-480205
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (650.89s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (569s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-503330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-503330 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m28.734234611s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-503330 -n embed-certs-503330
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (569.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (531.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-733270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-733270 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m51.022846453s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-733270 -n default-k8s-diff-port-733270
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (531.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-169021 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-169021 --alsologtostderr -v=3: (3.378662166s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021: exit status 7 (62.001814ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-169021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
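The exit status 7 from the status command is tolerated ("may be ok") because the profile was stopped in the previous step; the point of this check is only that addon enablement still succeeds against a stopped cluster. Done by hand:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-169021 -n old-k8s-version-169021 || true   # prints Stopped
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-169021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4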

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-203991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-203991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.55059129s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.55s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (57.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (57.064167362s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-203991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-203991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.20405344s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-203991 --alsologtostderr -v=3
E1009 20:37:54.687312   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/functional-179337/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-203991 --alsologtostderr -v=3: (10.594718375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.59s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-203991 -n newest-cni-203991
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-203991 -n newest-cni-203991: exit status 7 (64.03798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-203991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (41.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-203991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-203991 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (40.83040808s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-203991 -n newest-cni-203991
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (41.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m22.301490911s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.30s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-665212 "pgrep -a kubelet"
I1009 20:38:33.397704   16607 config.go:182] Loaded profile config "auto-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-665212 replace --force -f testdata/netcat-deployment.yaml
I1009 20:38:33.708383   16607 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8gkpl" [44d7c752-14fa-4ecf-9d33-7e2bda710752] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8gkpl" [44d7c752-14fa-4ecf-9d33-7e2bda710752] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003627284s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.33s)
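NetCatPod force-replaces the netcat Deployment from testdata and then polls the default namespace until an app=netcat pod is healthy. A roughly equivalent manual sequence is to wait on the Deployment rollout instead:

    kubectl --context auto-665212 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-665212 rollout status deployment/netcat --timeout=15m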

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-203991 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-203991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-203991 --alsologtostderr -v=1: (1.902007915s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-203991 -n newest-cni-203991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-203991 -n newest-cni-203991: exit status 2 (281.317568ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-203991 -n newest-cni-203991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-203991 -n newest-cni-203991: exit status 2 (334.214818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-203991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-203991 --alsologtostderr -v=1: (1.113197407s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-203991 -n newest-cni-203991
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-203991 -n newest-cni-203991
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.42s)
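The Pause step drives the same commands in sequence: pause the profile, confirm via status that the apiserver reports Paused and the kubelet reports Stopped (both status calls exit 2, which the test tolerates), then unpause and re-check. As a shell sequence:

    out/minikube-linux-amd64 pause -p newest-cni-203991 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-203991 -n newest-cni-203991 || true   # Paused
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-203991 -n newest-cni-203991 || true     # Stopped
    out/minikube-linux-amd64 unpause -p newest-cni-203991 --alsologtostderr -v=1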

                                                
                                    
TestNetworkPlugins/group/auto/DNS (16.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-665212 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-665212 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.170413304s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1009 20:38:59.894760   16607 retry.go:31] will retry after 1.155517783s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.49s)
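The DNS check retries the in-cluster lookup after a transient failure: the first attempt timed out ("no servers could be reached") and the retry about 1.2s later succeeded. A rough manual equivalent of that retry loop:

    for i in 1 2 3; do
      kubectl --context auto-665212 exec deployment/netcat -- nslookup kubernetes.default && break
      sleep 2
    done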

                                                
                                    
TestNetworkPlugins/group/calico/Start (90.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m30.215167871s)
--- PASS: TestNetworkPlugins/group/calico/Start (90.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
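The Localhost and HairPin probes reuse the same netcat pod: the first confirms the pod can reach a listener on its own localhost:8080, the second that it can reach itself back through the netcat Service name, i.e. hairpin-style traffic. The two checks side by side:

    kubectl --context auto-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"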

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (76.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1009 20:39:20.866046   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m16.761272174s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.76s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-lx2fn" [970081a8-f5ab-479a-a0cb-bc54f2230e97] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004319026s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
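ControllerPod only verifies that the CNI's daemon pod is Running in kube-system within the 10-minute window. The same check by hand (the test polls the pod list; kubectl wait is the shell shortcut):

    kubectl --context kindnet-665212 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m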

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (72.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m12.387877502s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-665212 "pgrep -a kubelet"
I1009 20:39:34.825360   16607 config.go:182] Loaded profile config "kindnet-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-665212 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8dnzg" [a41a803b-6022-41b2-872b-88a433560543] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8dnzg" [a41a803b-6022-41b2-872b-88a433560543] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005436784s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (85.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m25.927458608s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.93s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d57pq" [8b22012a-825a-4582-95da-eb0f90822878] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005116828s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-665212 "pgrep -a kubelet"
I1009 20:40:22.499125   16607 config.go:182] Loaded profile config "calico-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-665212 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gh2q9" [d6170332-3b7c-44fa-8885-8640f0a38834] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gh2q9" [d6170332-3b7c-44fa-8885-8640f0a38834] Running
E1009 20:40:32.551527   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/no-preload-480205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005705776s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-665212 "pgrep -a kubelet"
I1009 20:40:33.405455   16607 config.go:182] Loaded profile config "custom-flannel-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-665212 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p5mf2" [0f74d02e-8ba0-4839-8ebd-970f2af9c5e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p5mf2" [0f74d02e-8ba0-4839-8ebd-970f2af9c5e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.005241995s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-665212 "pgrep -a kubelet"
I1009 20:40:45.829228   16607 config.go:182] Loaded profile config "enable-default-cni-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-665212 replace --force -f testdata/netcat-deployment.yaml
E1009 20:40:46.424878   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.431315   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.442740   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.464238   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.505778   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.587754   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:46.749179   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:149: (dbg) Done: kubectl --context enable-default-cni-665212 replace --force -f testdata/netcat-deployment.yaml: (1.234933933s)
E1009 20:40:47.072455   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
I1009 20:40:47.076748   16607 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1009 20:40:47.235693   16607 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rvgpd" [b8fe6658-64de-4c10-a7c3-734bbacf0d85] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1009 20:40:47.714144   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:48.999217   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:51.560862   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rvgpd" [b8fe6658-64de-4c10-a7c3-734bbacf0d85] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004272612s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (86.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1009 20:40:53.915237   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:53.921912   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:53.933337   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:53.954787   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:53.996629   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:54.078075   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:54.240269   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:54.561706   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:55.203046   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:56.484681   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
E1009 20:40:56.682534   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/old-k8s-version-169021/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-665212 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m26.110041027s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.11s)
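For reference, the bridge CNI run above can be reproduced outside the test harness with the same flags the harness logged; the profile name (bridge-665212), memory size and KVM driver are specific to this run and would normally be adjusted:

  out/minikube-linux-amd64 start -p bridge-665212 --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio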

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xzxln" [6c8e0cfb-ffbf-434c-8019-2558df41f9f1] Running
E1009 20:41:34.891546   16607 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/default-k8s-diff-port-733270/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004490983s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
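A manual equivalent of the wait above, assuming the flannel-665212 profile still exists, is to list the flannel DaemonSet pods by the same label and namespace the test polls:

  kubectl --context flannel-665212 -n kube-flannel get pods -l app=flannel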

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-665212 "pgrep -a kubelet"
I1009 20:41:35.599880   16607 config.go:182] Loaded profile config "flannel-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-665212 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6n8hj" [466c45ef-7091-40b6-b47e-fd1e73f1b661] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6n8hj" [466c45ef-7091-40b6-b47e-fd1e73f1b661] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003799408s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-665212 "pgrep -a kubelet"
I1009 20:42:20.043916   16607 config.go:182] Loaded profile config "bridge-665212": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-665212 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-njsr8" [9d46a421-f3dc-44ff-bb76-6fe4a5d4143b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-njsr8" [9d46a421-f3dc-44ff-bb76-6fe4a5d4143b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003565053s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-665212 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
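The Localhost and HairPin checks above both run netcat inside the deployment; a rough manual equivalent, assuming the bridge-665212 context and the netcat deployment are still present, is:

  kubectl --context bridge-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context bridge-665212 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The second command exercises hairpin traffic: the pod connects back to itself through its own service name (netcat) rather than through localhost.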

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.28
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
116 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
118 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
119 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
120 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
257 TestStartStop/group/disable-driver-mounts 0.15
265 TestNetworkPlugins/group/kubenet 3.28
273 TestNetworkPlugins/group/cilium 3.23
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-421083 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-324052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-324052
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-665212 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-665212" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-665212

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-665212"

                                                
                                                
----------------------- debugLogs end: kubenet-665212 [took: 3.147943501s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-665212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-665212
--- SKIP: TestNetworkPlugins/group/kubenet (3.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-665212 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-665212" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19780-9412/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 20:05:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.72.252:8443
  name: cert-expiration-261596
contexts:
- context:
    cluster: cert-expiration-261596
    extensions:
    - extension:
        last-update: Wed, 09 Oct 2024 20:05:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-261596
  name: cert-expiration-261596
current-context: cert-expiration-261596
kind: Config
preferences: {}
users:
- name: cert-expiration-261596
  user:
    client-certificate: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-expiration-261596/client.crt
    client-key: /home/jenkins/minikube-integration/19780-9412/.minikube/profiles/cert-expiration-261596/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-665212

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-665212" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-665212"

                                                
                                                
----------------------- debugLogs end: cilium-665212 [took: 3.084640943s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-665212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-665212
--- SKIP: TestNetworkPlugins/group/cilium (3.23s)